Columns: id (string, 179 distinct values), question (string, 8.75k–85.9k characters), answer (dict)
1912.11602
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Make Lead Bias in Your Favor: A Simple and Effective Method for News Summarization <<<Abstract>>> Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information. While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information. We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus: predicting the leading sentences using the rest of an article. Via careful data cleaning and filtering, our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks. With further finetuning, our model outperforms many competitive baseline models. Human evaluations further show the effectiveness of our method. <<</Abstract>>> <<<Introduction>>> The goal of text summarization is to condense a piece of text into a shorter version that contains the salient information. Due to the prevalence of news articles and the need to provide succinct summaries for readers, a majority of existing datasets for summarization come from the news domain BIBREF0, BIBREF1, BIBREF2. However, according to journalistic conventions, the most important information in a news report usually appears near the beginning of the article BIBREF3. While it facilitates faster and easier understanding of the news for readers, this lead bias causes undesirable consequences for summarization models. The output of these models is inevitably affected by the positional information of sentences. Furthermore, the simple baseline of using the top few sentences as summary can achieve a stronger performance than many sophisticated models BIBREF4. It can take a lot of effort for models to overcome the lead bias BIBREF3. Additionally, most existing summarization models are fully supervised and require time and labor-intensive annotations to feed their insatiable appetite for labeled data. For example, the New York Times Annotated Corpus BIBREF1 contains 1.8 million news articles, with 650,000 summaries written by library scientists. Therefore, some recent work BIBREF5 explores the effect of domain transfer to utilize datasets other than the target one. But this method may be affected by the domain drift problem and still suffers from the lack of labelled data. The recent promising trend of pretraining models BIBREF6, BIBREF7 proves that a large quantity of data can be used to boost NLP models' performance. Therefore, we put forward a novel method to leverage the lead bias of news articles in our favor to conduct large-scale pretraining of summarization models. The idea is to leverage the top few sentences of a news article as the target summary and use the rest as the content. The goal of our pretrained model is to generate an abstractive summary given the content. 
Coupled with careful data filtering and cleaning, the lead bias can provide a delegate summary of sufficiently good quality, and it immediately renders the large quantity of unlabeled news articles corpus available for training news summarization models. We employ this pretraining idea on a three-year collection of online news articles. We conduct thorough data cleaning and filtering. For example, to maintain a quality assurance bar for using leading sentences as the summary, we compute the ratio of overlapping non-stopping words between the top 3 sentences and the rest of the article. As a higher ratio implies a closer semantic connection, we only keep articles for which this ratio is higher than a threshold. We end up with 21.4M articles based on which we pretrain a transformer-based encoder-decoder summarization model. We conduct thorough evaluation of our models on five benchmark news summarization datasets. Our pretrained model achieves a remarkable performance on various target datasets without any finetuning. This shows the effectiveness of leveraging the lead bias to pretrain on large-scale news data. We further finetune the model on target datasets and achieve better results than a number of strong baseline models. For example, the pretrained model without finetuning obtains state-of-the-art results on DUC-2003 and DUC-2004. The finetuned model obtains 3.2% higher ROUGE-1, 1.6% higher ROUGE-2 and 2.1% higher ROUGE-L scores than the best baseline model on XSum dataset BIBREF2. Human evaluation results also show that our models outperform existing baselines like pointer-generator network. The rest of paper is organized as follows. We introduce related work in news summarization and pretraining in Sec:rw. We describe the details of pretraining using lead bias in Sec:pre. We introduce the transformer-based summarization model in Sec:model. We show the experimental results in Sec:exp and conclude the paper in Sec:conclusion. <<</Introduction>>> <<<Related work>>> <<<Document Summarization>>> End-to-end abstractive text summarization has been intensively studied in recent literature. To generate summary tokens, most architectures take the encoder-decoder approach BIBREF8. BIBREF9 first introduces an attention-based seq2seq model to the abstractive sentence summarization task. However, its output summary degenerates as document length increases, and out-of-vocabulary (OOV) words cannot be efficiently handled. To tackle these challenges, BIBREF4 proposes a pointer-generator network that can both produce words from the vocabulary via a generator and copy words from the source article via a pointer. BIBREF10 utilizes reinforcement learning to improve the result. BIBREF11 uses a content selector to over-determine phrases in source documents that helps constrain the model to likely phrases. BIBREF12 adds Gaussian focal bias and a salience-selection network to the transformer encoder-decoder structure BIBREF13 for abstractive summarization. BIBREF14 randomly reshuffles the sentences in news articles to reduce the effect of lead bias in extractive summarization. <<</Document Summarization>>> <<<Pretraining>>> In recent years, pretraining language models have proved to be quite helpful in NLP tasks. The state-of-the-art pretrained models include ELMo BIBREF15, GPT BIBREF7, BERT BIBREF6 and UniLM BIBREF16. Built upon large-scale corpora, these pretrained models learn effective representations for various semantic structures and linguistic relationships. 
As a result, pretrained models have been widely used with considerable success in applications such as question answering BIBREF17, sentiment analysis BIBREF15 and passage reranking BIBREF18. Furthermore, UniLM BIBREF16 leverages its sequence-to-sequence capability for abstractive summarization; the BERT model has been employed as an encoder in BERTSUM BIBREF19 for extractive/abstractive summarization. Compared to our work, UniLM BIBREF16 is a general language model framework and does not take advantage of the special semantic structure of news articles. Similarly, BERTSUM BIBREF19 directly copies the pretrained BERT structure into its encoder and finetunes on labelled data instead of pretraining with the large quantity of unlabeled news corpus available. Recently, PEGASUS BIBREF20 leverages a similar idea of summarization pretraining, but they require finetuning with data from target domains, whereas our model has a remarkable performance without any finetuning. <<</Pretraining>>> <<</Related work>>> <<<Pretraining with Leading Sentences>>> News articles usually follow the convention of placing the most important information early in the content, forming an inverted pyramid structure. This lead bias has been discovered in a number of studies BIBREF3, BIBREF14. One of the consequences is that the lead baseline, which simply takes the top few sentences as the summary, can achieve a rather strong performance in news summarization. For instance, in the CNN/Daily Mail dataset BIBREF0, using the top three sentences as summaries can get a higher ROUGE score than many deep learning based models. This positional bias brings lots of difficulty for models to extract salient information from the article and generate high-quality summaries. For instance, BIBREF14 discovers that most models' performances drop significantly when a random sentence is inserted in the leading position, or when the sentences in a news article are shuffled. On the other hand, news summarization, just like many other supervised learning tasks, suffers from the scarcity of labelled training data. Abstractive summarization is especially data-hungry since the efficacy of models depends on high-quality handcrafted summaries. We propose that the lead bias in news articles can be leveraged in our favor to train an abstractive summarization model without human labels. Given a news article, we treat the top three sentences, denoted by Lead-3, as the target summary, and use the rest of the article as news content. The goal of the summarization model is to produce Lead-3 using the following content, as illustrated in fig:top3. The benefit of this approach is that the model can leverage the large number of unlabeled news articles for pretraining. In the experiment, we find that the pretrained model alone can have a strong performance on various news summarization datasets, without any further training. We also finetune the pretrained model on downstream datasets with labelled summaries. The model can quickly adapt to the target domain and further increase its performance. It is worth noting that this idea of utilizing structural bias for large-scale summarization pretraining is not limited to specific types of models, and it can be applied to other types of text as well: academic papers with abstracts, novels with editor's notes, books with tables of contents. However, one should carefully examine and clean the source data to take advantage of lead bias, as the top three sentences may not always form a good summary. 
We provide more details in the experiments about the data filtering and cleaning mechanism we apply. <<</Pretraining with Leading Sentences>>> <<<Model>>> In this section, we introduce our abstractive summarization model, which has a transformer-based encoder-decoder structure. We first formulate the supervised summarization problem and then present the network architecture. <<<Problem formulation>>> We formalize the problem of supervised abstractive summarization as follows. The input consists of $a$ pairs of articles and summaries: $\lbrace (X_1, Y_1), (X_2, Y_2), ..., (X_a, Y_a)\rbrace $. Each article and summary are tokenized: $X_i=(x_1,...,x_{L_i})$ and $Y_i=(y_1,...,y_{N_i})$. In abstractive summarization, the summary tokens need not be from the article. For simplicity, we will drop the data index subscript. The goal of the system is to generate summary $Y=(y_1,...,y_m)$ given the transcript $X=\lbrace x_1, ..., x_n\rbrace $. <<</Problem formulation>>> <<<Network Structure>>> We utilize a transformer-based encoder-decoder structure that maximizes the conditional probability of the summary: $P(Y|X, \theta )$, where $\theta $ represents the parameters. <<<Encoder>>> The encoder maps each token into a fixed-length vector using a trainable dictionary $\mathcal {D}$ randomly initialized using a normal distribution with zero mean and a standard deviation of 0.02. Each transformer block conducts multi-head self-attention. And we use sinusoidal positional embedding in order to process arbitrarily long input. In the end, the output of the encoder is a set of contextualized vectors: <<</Encoder>>> <<<Decoder>>> The decoder is a transformer that generates the summary tokens one at a time, based on the input and previously generated summary tokens. Each token is projected onto a vector using the same dictionary $\mathcal {D}$ as the encoder. The decoder transformer block includes an additional cross-attention layer to fuse in information from the encoder. The output of the decoder transformer is denoted as: To predict the next token $w_{k}$, we reuse the weights of dictionary $\mathcal {D}$ as the final linear layer to decode $u^D_{k-1}$ into a probability distribution over the vocabulary: $P(w_k|w_{<k},u^E_{1:m})=( \mathcal {D}u^D_{k-1})$. Training. During training, we seek to minimize the cross-entropy loss: We use teacher-forcing in decoder training, i.e. the decoder takes ground-truth summary tokens as input. The model has 10 layers of 8-headed transformer blocks in both its encoder and decoder, with 154.4M parameters. Inference. During inference, we employ beam search to select the best candidate. The search starts with the special token $\langle \mbox{BEGIN}\rangle $. We ignore any candidate word which results in duplicate trigrams. We select the summary with the highest average log-likelihood per token. <<</Decoder>>> <<</Network Structure>>> <<</Model>>> <<<Experiments>>> <<<Datasets>>> We evaluate our model on five benchmark summarization datasets: the New York Times Annotated Corpus (NYT) BIBREF1, XSum BIBREF2, the CNN/DailyMail dataset BIBREF0, DUC-2003 and DUC-2004 BIBREF21. These datasets contain 104K, 227K, 312K, 624 and 500 news articles and human-edited summaries respectively, covering different topics and various summarization styles. For NYT dataset, we use the same train/val/test split and filtering methods following BIBREF22. As DUC-2003/2004 datasets are very small, we follow BIBREF23 to employ them as test set only. 
<<</Datasets>>> <<<Implementation Details>>> We use SentencePiece BIBREF24 for tokenization, which segments any sentence into subwords. We train the SentencePiece model on pretrained data to generate a vocabulary of size 32K and of dimension 720. The vocabulary stays fixed during pretraining and finetuning. Pretraining. We collect three years of online news articles from June 2016 to June 2019. We filter out articles overlapping with the evaluation data on media domain and time range. We then conduct several data cleaning strategies. First, many news articles begin with reporter names, media agencies, dates or other contents irrelevant to the content, e.g. “New York (CNN) –”, “Jones Smith, May 10th, 2018:”. We therefore apply simple regular expressions to remove these prefixes. Second, to ensure that the summary is concise and the article contains enough salient information, we only keep articles with 10-150 words in the top three sentences and 150-1200 words in the rest, and that contain at least 6 sentences in total. In this way, we filter out i) articles with excessively long content to reduce memory consumption; ii) very short leading sentences with little information which are unlikely to be a good summary. To encourage the model to generate abstrative summaries, we also remove articles where any of the top three sentences is exactly repeated in the rest of the article. Third, we try to remove articles whose top three sentences may not form a relevant summary. For this purpose, we utilize a simple metric: overlapping words. We compute the portion of non-stopping words in the top three sentences that are also in the rest of an article. A higher portion implies that the summary is representative and has a higher chance of being inferred by the model using the rest of the article. To verify, we compute the overlapping ratio of non-stopping words between human-edited summary and the article in CNN/DailyMail dataset, which has a median value of 0.87. Therefore, in pretraining, we keep articles with an overlapping word ratio higher than 0.65. These filters rule out around 95% of the raw data and we end up with 21.4M news articles, 12,000 of which are randomly sampled for validation. We pretrain the model for 10 epochs and evaluate its performance on the validation set at the end of each epoch. The model with the highest ROUGE-L score is selected. During pretraining, we use a dropout rate of 0.3 for all inputs to transformer layers. The batch size is 1,920. We use RAdam BIBREF25 as the optimizer, with a learning rate of $10^{-4}$. Also, due to the different numerical scales of the positional embedding and initialized sentence piece embeddings, we divide the positional embedding by 100 before feeding it into the transformer. The beam width is set to 5 during inference. Finetuning. During finetuning, we keep the optimizer, learning rate and dropout rate unchanged as in pretraining. The batch size is 32 for all datasets. We pick the model with the highest ROUGE-L score on the validation set and report its performance on the test set. Our strategy of Pretraining with unlabeled Lead-3 summaries is called PL. We denote the pretrained model with finetuning on target datasets as PL-FT. The model with only pretraining and no finetuning is denoted as PL-NoFT, which is the same model for all datasets. <<</Implementation Details>>> <<<Baseline>>> To compare with our model, we select a number of strong summarization models as baseline systems. 
$\textsc {Lead-X}$ uses the top $X$ sentences as a summary BIBREF19. The value of $X$ is 3 for NYT and CNN/DailyMail and 1 for XSum to accommodate the nature of summary length. $\textsc {PTGen}$ BIBREF4 is the pointer-generator network. $\textsc {DRM}$ BIBREF10 leverages deep reinforcement learning for summarization. $\textsc {TConvS2S}$ BIBREF2 is based on convolutional neural networks. $\textsc {BottomUp}$ BIBREF11 uses a bottom-up approach to generate summarization. ABS BIBREF26 uses neural attention for summary generation. DRGD BIBREF27 is based on a deep recurrent generative decoder. To compare with our pretrain-only model, we include several unsupervised abstractive baselines: SEQ$^3$ BIBREF28 employs the reconstruction loss and topic loss for summarization. BottleSum BIBREF23 leverages unsupervised extractive and self-supervised abstractive methods. GPT-2 BIBREF7 is a large-scaled pretrained language model which can be directly used to generate summaries. <<</Baseline>>> <<<Metrics>>> We employ the standard ROUGE-1, ROUGE-2 and ROUGE-L metrics BIBREF29 to evaluate all summarization models. These three metrics respectively evaluate the accuracy on unigrams, bigrams and longest common subsequence. ROUGE metrics have been shown to highly correlate with the human judgment BIBREF29. Following BIBREF22, BIBREF23, we use F-measure ROUGE on XSUM and CNN/DailyMail, and use limited-length recall-measure ROUGE on NYT and DUC. In NYT, the prediction is truncated to the length of the ground-truth summaries; in DUC, the prediction is truncated to 75 characters. <<</Metrics>>> <<<Results>>> The results are displayed in tab:nyt, tab:xsumresults, tab:cnndaily and tab:duc. As shown, on both NYT and XSum dataset, PL-FT outperforms all baseline models by a large margin. For instance, PL-FT obtains 3.2% higher ROUGE-1, 1.6% higher ROUGE-2 and 2.1% higher ROUGE-L scores than the best baseline model on XSum dataset. We conduct statistical test and found that the results are all significant with p-value smaller than 0.05 (marked by *) or 0.01 (marked by **), compared with previous best scores. On CNN/DailyMail dataset, PL-FT outperforms all baseline models except BottomUp BIBREF11. PL-NoFT, the pretrained model without any finetuning, also gets remarkable results. On XSum dataset, PL-NoFT is almost 8% higher than Lead-1 in ROUGE-1 and ROUGE-L. On CNN/DailyMail dataset, PL-NoFT significantly outperforms unsupervised models SEQ$^3$ and GPT-2, and even surpasses the supervised pointer-generator network. PL-NoFT also achieves state-of-the-art results on DUC-2003 and DUC-2004 among unsupervised models (except ROUGE-1 on DUC-2004), outperforming other carefully designed unsupervised summarization models. It's worth noting that PL-NoFT is the same model for all experiments, which proves that our pretrain strategy is effective across different news corpus. <<</Results>>> <<<Abstractiveness Analysis>>> We measure the abstractiveness of our model via the ratio of novel n-grams in summaries, i.e. the percentage of n-grams in the summary that are not present in the article. fig:novel shows this ratio in summaries from reference and generated by PL-NoFT and PL-FT in NYT dataset. Both PL-NoFT and PL-FT yield more novel 1-grams in summary than the reference. And PL-NoFT has similar novelty ratio with the reference in other n-gram categories. Also, we observe that the novelty ratio drops after finetuning. We attribute this to the strong lead bias in the NYT dataset which affects models trained on it. 
<<</Abstractiveness Analysis>>> <<<Human Evaluation>>> We conduct human evaluation of the generated summaries from our models and the pointer-generator network with coverage. We randomly sample 100 articles from the CNN/DailyMail test set and ask 3 human labelers from Amazon Mechanical Turk to assess the quality of summaries with a score from 1 to 5 (5 means perfect quality). The labelers need to judge whether the summary expresses the salient information from the article in concise and fluent language. The evaluation guidelines are given in Table TABREF23. To reduce bias, we randomly shuffle summaries from different sources for each article. As shown in Table TABREF23, both of our models PL-NoFT and PL-FT outperform the pointer-generator network (PTGen+Cov), and PL-FT's advantage over PTGen+Cov is statistically significant. This shows the effectiveness of both our pretraining and finetuning strategies. To evaluate the inter-annotator agreement, we compute the kappa statistic among the labels; the score is 0.34. <<</Human Evaluation>>> <<</Experiments>>> <<<Conclusions>>> In this paper, we propose a simple and effective pretraining method for news summarization. By employing the leading sentences of a news article as its target summary, we turn the problematic lead bias in news summarization in our favor. Based on this strategy, we conduct pretraining for abstractive summarization on a large-scale news corpus. We conduct thorough empirical tests on five benchmark news summarization datasets, including both automatic and human evaluations. Results show that the same pretrained model, without any finetuning, achieves state-of-the-art results among unsupervised methods on various news summarization datasets, and that finetuning on target domains can further improve the model's performance. We argue that this pretraining method can be applied in more scenarios where structural bias exists. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated work\nDocument Summarization\nPretraining\nPretraining with Leading Sentences\nModel\nProblem formulation\nNetwork Structure\nEncoder\nDecoder\nExperiments\nDatasets\nImplementation Details\nBaseline\nMetrics\nResults\nAbstractiveness Analysis\nHuman Evaluation\nConclusions" ], "type": "outline" }
1911.01680
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Improving Slot Filling by Utilizing Contextual Information <<<Abstract>>> Slot Filling is the task of extracting the semantic concept from a given natural language utterance. Recently it has been shown that using contextual information, either in word representations (e.g., BERT embeddings) or in the computation graph of the model, could improve the performance of the model. However, recent work uses the contextual information in a restricted manner, e.g., by concatenating the word representation and its context feature vector, preventing the model from learning any direct association between the context and the label of the word. We introduce a new deep model utilizing the contextual information for each word in the given sentence in a multi-task setting. Our model enforces consistency between the feature vectors of the context and the word while increasing the expressiveness of the context about the label of the word. Our empirical analysis on a slot filling dataset proves the superiority of the model over the baselines. <<</Abstract>>> <<<Introduction>>> Slot Filling (SF) is the task of identifying the semantic concept expressed in a natural language utterance. For instance, consider a request to edit an image expressed in natural language: “Remove the blue ball on the table and change the color of the wall to brown”. Here, the user asks for an "Action" (i.e., removing) on one “Object” (blue ball on the table) in the image and changing an “Attribute” (i.e., color) of the image to a new “Value” (i.e., brown). Our goal in SF is to provide a sequence of labels for the given sentence to identify the semantic concept expressed in it. Prior work has shown that contextual information can be useful for SF. It utilizes contextual information either in word-level representations (i.e., via contextualized embeddings, e.g., BERT BIBREF0) or in the model computation graph (e.g., concatenating the context feature to the word feature BIBREF1). However, such methods fail to capture the explicit dependence between the context of the word and its label. Moreover, such limited use of contextual information (i.e., concatenation of the feature vector and context vector) cannot model the interaction between the word representation and its context. In order to alleviate these issues, in this work we propose a novel model to explicitly increase the predictability of the word label using its context and to increase the interactivity between word representations and their context. More specifically, in our model we use the context of the word to predict its label, and by doing so our model learns a label-aware context for each word in the sentence. In order to improve the interactivity between the word representation and its context, we increase the mutual information between them. In addition to these contributions, we also propose an auxiliary task to predict which labels are expressed in a given sentence. Our model is trained in a multi-task framework. Our experiments on an SF dataset for identifying semantic concepts from natural language requests to edit an image show the superiority of our model compared to previous baselines.
Our model achieves state-of-the-art results on the benchmark dataset, improving the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction. <<</Introduction>>> <<<Related Work>>> The task of Slot Filling is formulated as a sequence labeling problem. Deep learning has been extensively employed for this task (BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11). Prior work has mainly utilized recurrent neural networks as the encoder to extract features per word and a Conditional Random Field (CRF) BIBREF12 as the decoder to generate the labels per word. Recently, the work of BIBREF1 showed that the global context of the sentence can be useful to enhance the performance of neural sequence labeling. In their approach, they use a separate sequential model to extract word features. Afterwards, using max pooling over the representations of the words, they obtain the sentence representation and concatenate it to the word embedding as the input to the main task encoder (i.e., the RNN model that performs sequence labeling). The benefit of using the global context along with the word representation is two-fold: 1) it enhances the representation of the word with the semantics of the entire sentence, making the word representations more contextualized; 2) the global view of the sentence can increase model performance, as it contains information about the entire sentence that might not be encoded in word representations due to long dependencies. However, the simple concatenation of the global context and the word embeddings does not separately ensure these two benefits of the global context. In order to address this problem, we introduce a multi-task setting to separately ensure the aforementioned benefits of utilizing contextual information. In particular, to ensure better contextualized representations of the words, the model is encouraged to learn representations for each word that are consistent with its context. This is achieved by increasing the mutual information between the word representation and its context. To ensure the usefulness of the contextual information for the final task, we introduce two novel sub-tasks. The first one aims to employ the context of the word, instead of the word representation, to predict the label of the word. In the second sub-task, we use the global representation of the sentence to predict which labels exist in the given sentence in a multi-label classification setting. These two sub-tasks encourage the contextual representations to be informative for both word-level classification and sentence-level classification. <<</Related Work>>> <<<Model>>> Our model is trained in a multi-task setting in which the main task is slot filling, i.e., to identify the best possible sequence of labels for the given sentence. In the first auxiliary task, we aim to increase the consistency between the word representation and its context. The second auxiliary task is to enhance the task-specific information in the contextual representations. In this section, we explain each of these tasks in more detail. <<<Slot Filling>>> The input to the model is a sequence of words $x_1,x_2,...,x_N$. The goal is to assign each word one of the labels action, object, attribute, value or other. Following other methods for sequence labelling, we use the BIO encoding schema. In addition to the sequence of words, the part-of-speech (POS) tags and the dependency parse tree of the input are given to the model.
The input word $x_i$ is represented by the concatenation of its pre-trained word embedding and its POS tag embedding, denoted by $e_i$. These representations are further abstracted using a 2-layer Bi-Directional Long Short-Term Memory (LSTM) to obtain feature vector $h_i$. We use the dependency tree of the sentence to utilize the syntactical information about the input text. This information could be useful to identify the important words and their dependents in the sentence. In order to model the syntactic tree, we utilize Graph Convolutional Network (GCN) BIBREF13 over the dependency tree. This model learns the contextualized representations of the words such that the representation of each word is contextualized by its neighbors. We employ 2-layer GCN with $h_i$ as the initial representation for the node (i.e., word) $i$th. The representations of the $i$th node is an aggregation of the representations of its neighbors. Formally the hidden representations of the $i$th word in $l$th layer of GCN is obtained by: where $N(i)$ is the neighbors of the $i$th word in the dependency tree, $W_l$ is the weight matrix in $l$th layer and $deg(i)$ is the degree of the $i$th word in the dependency tree. The biases are omitted for brevity. The final representations of the GCN for $i$th word, $\hat{h}_i$, represent the structural features for that word. Afterwards, we concatenate the structural features $\hat{h}_i$ and sequential features $h_i$ to represent $i$th word by feature vector $h^{\prime }_i$: Finally in order to label each word in the sentence we employ a task specific 2-layer feed forward neural net followed by a logistic regression model to generate class scores $S_i$ for each word: where $W_{LR}, W_1$ and $W_2$ are trainable parameters and $S_i$ is a vector of size number of classes in which each dimension of it is the score for the corresponding class. Since the main task is sequence labeling we exploit Conditional Random Field (CRF) as the final layer to predict the sequence of labels for the given sentence. More specifically, class scores $S_i$ are fed into the CRF layer as emission scores to obtain the final labeling score: where $T$ is the trainable transition matrix and $\theta $ is the parameters of the model to generate emission scores $S_i$. Viterbi loss $L_{VB}$ is used as the final loss function to be optimized during training. In the inference time, the Viterbi decoder is employed to find the sequence of labels with highest score. <<</Slot Filling>>> <<<Consistency with Contextual Representation>>> In this sub-task we aim to increase the consistency of the word representation and its context. To obtain the context of each word we perform max pooling over the all words of the sentence excluding the word itself: where $h_i$ is the representation of the $i$th word from the Bi-LSTM. We aim to increase the consistency between vectors $h_i$ and $h^c_i$. One way to achieve this is by decreasing the distance between these two vectors. However, directly enforcing the word representation and its context to be close to each other would not be efficient as in long sentences the context might substantially differs from the word. So in order to make enough room for the model to represent the context of each word while it is consistent with the word representation, we employ an indirect method. We propose to maximize the mutual information (MI) between the word representation and its context in the loss function. 
In information theory, MI evaluates how much information we know about one random variable if the value of another variable is revealed. Formally, the mutual information between two random variable $X_1$ and $X_2$ is obtained by: Using this definition of MI, we can reformulate the MI equation as KL Divergence between the joint distribution $P_{X_1X_2}=P(X_1,X_2)$ and the product of marginal distributions $P_{X_1\bigotimes X_2}=P(X_1)P(X_2)$: Based on this understanding of MI, we can see that if the two random variables are dependent then the mutual information between them (i.e. the KL-Divergence in equation DISPLAY_FORM9) would be the highest. Consequently, if the representations $h_i$ and $h^c_i$ are encouraged to have large mutual information, we expect them to share more information. The mutual information would be introduced directly into the loss function for optimization. One issue with this approach is that the computation of the MI for such high dimensional continuous vectors as $h_i$ and $h^c_i$ is prohibitively expensive. In this work, we propose to address this issue by employing the mutual information neural estimation (MINE) in BIBREF14 that seeks to estimate the lower bound of the mutual information between the high dimensional vectors via adversarial training. To this goal, MINE attempts to compute the lower bound of the KL divergence between the joint and marginal distributions of the given high dimensional vectors/variables. In particular, MINE computes the lower bound of the Donsker-Varadhan representation of KL-Divergence: However, recently, it has been shown that other divergence metrics (i.e., the Jensen-Shannon divergence) could also be used for this purpose BIBREF15, BIBREF16, offering simpler methods to compute the lower bound for the MI. Consequently, following such methods, we apply the adversarial approach to obtain the MI lower bound via the binary cross entropy of a variable discriminator. This discriminator differentiates the variables that are sampled from the joint distribution from those that are sampled from product of the marginal distributions. In our case, the two variables are the word representation $h_i$ and context representation $h^c_i$. In order to sample from joint distributions, we simply concatenate $h_i$ and $h^c_i$ (i.e., the positive example). To sample from the product of the marginal distributions, we concatenate the representation $h_i$ with $h^c_j$ where $i\ne j$ (i.e., the negative example). These samples are fed into a 2-layer feed forward neural network $D$ (i.e., the discriminator) to perform a binary classification (i.e., coming from the joint distribution or the product of the marginal distributions). Finally, we use the following binary cross entropy loss to estimate the mutual information between $h_i$ and $h^c_i$ to add into the overall loss function: where $N$ is the length of the sentence and $[h,h^c_i]$ is the concatenation of the two vectors $h$ and $h^c_i$. This loss is added to the final loss function of the model. <<</Consistency with Contextual Representation>>> <<<Prediction by Contextual Information>>> In addition to increasing consistency between the word representation and its context representation, we aim to increase the task specific information in contextual representations. This is desirable as the main task is utilizing the word representation to predict its label. 
Since our model enforces consistency between the word representation and its context, increasing the task-specific information in the contextual representations should help the model's final performance. In order to increase the task-specific information in the contextual representation, we train the model on two auxiliary tasks. The first one aims to use the context of each word to predict the label of that word, and the goal of the second auxiliary task is to use the global context information to predict sentence-level labels. We describe each of these tasks in more detail in the following sections. <<<Predicting Word Label>>> In this sub-task, we use the context representation of each word to predict its label. This increases the information encoded in the context of the word about the label of the word. We use the same context vector $h^c_i$ for the $i$th word as described in the previous section. This vector is fed into a 2-layer feed forward neural network with a softmax layer at the end to output the probabilities for each class: where $W_2$ and $W_1$ are trainable parameters. Biases are omitted for brevity. Finally, we use the following cross-entropy loss function to be optimized during training: where $N$ is the length of the sentence and $l_i$ is the label of the $i$th word. <<</Predicting Word Label>>> <<<Predicting Sentence Labels>>> The word label prediction enforces the context of each word to contain information about its label, but it does not ensure that the contextual information captures the sentence-level patterns for expressing intent. In other words, the word-level prediction lacks a general view of the entire sentence. In order to increase the general information about the sentence in the representations of its words, we aim to predict the labels existing in a sentence from the representations of its words. More specifically, we introduce a new sub-task to predict which labels exist in the given sentence (note that sentences might contain only a subset of the labels, e.g., only action and object). We formulate this task as a multi-label classification problem. Formally, given the sentence $X=x_1,x_2,...,x_N$ and label set $S=\lbrace action, attribute, object, value\rbrace $, our goal is to predict the vector $L^s=l^s_1,l^s_2,...,l^s_{|S|}$, where $l^s_i$ is one if the sentence $X$ contains the $i$th label from the label set $S$ and zero otherwise. First, we find the representation of the sentence from the word representations. To this end, we use max pooling over all words of the sentence to obtain the vector $H$: Afterwards, the vector $H$ is further abstracted by a 2-layer feed forward neural net with a sigmoid function at the end: where $W_2$ and $W_1$ are trainable parameters. Note that since this task is a multi-label classification, the number of neurons at the final layer is equal to $|S|$. We optimize the following binary cross-entropy loss function: where $l_k$ is one if the sentence contains the $k$th label and zero otherwise. Finally, to train the model we optimize the following loss function: where $\alpha $, $\beta $ and $\gamma $ are hyper-parameters to be tuned using development set performance. <<</Predicting Sentence Labels>>> <<</Prediction by Contextual Information>>> <<</Model>>> <<<Experiments>>> In our experiments, we use the Onsei Intent Slot dataset. Table TABREF21 shows the statistics of this dataset.
We use the following hyper parameters in our model: We set the word embedding and POS embedding to 768 and 30 respectively; The pre-trained BERT BIBREF17 embedding are used to initialize word embeddings; The hidden dimension of the Bi-LSTM, GCN and feed forward networks are 200; the hyper parameters $\alpha $, $\beta $ and $\gamma $ are all set to 0.1; We use Adam optimizer with learning rate 0.003 to train the model. We use micro-averaged F1 score on all labels as the evaluation metric. We compare our method with the models trained using Adobe internal NLU tool, Pytext BIBREF18 and Rasa BIBREF19 NLU tools. Table TABREF22 shows the results on Test set. Our model improves the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction. This improvements proves the effectiveness of using contextual information for the task of slot filling. In order to analyze the contribution of the proposed sub-tasks we also evaluate the model when we remove one of the sub-task and retrain the model. The results are reported in Table TABREF23. This table shows that all sub-tasks are required for the model to have its best performance. Among all sub-tasks the word level prediction using the contextual information has the major contribution to the model performance. This fact shows that contextual information trained to be informative about the final sub-task is necessary to obtain the representations which could boost the final model performance. <<</Experiments>>> <<<Conclusion & Future Work>>> In this work we introduce a new deep model for the task of Slot Filling. In a multi-task setting, our model increase the mutual information between word representations and its context, improve the label information in the context and predict which concepts are expressed in the given sentence. Our experiments on an image edit request corpus shows that our model achieves state-of-the-art results on this dataset. <<</Conclusion & Future Work>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nModel\nSlot Filling\nConsistency with Contextual Representation\nPrediction by Contextual Information\nPredicting Word Label\nPredicting Sentence Labels\nExperiments\nConclusion & Future Work" ], "type": "outline" }
2002.05104
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Component Analysis for Visual Question Answering Architectures <<<Abstract>>> Recent research advances in Computer Vision and Natural Language Processing have introduced novel tasks that are paving the way for solving AI-complete problems. One of those tasks is called Visual Question Answering (VQA). A VQA system must take an image and a free-form, open-ended natural language question about the image, and produce a natural language answer as the output. Such a task has drawn great attention from the scientific community, which generated a plethora of approaches that aim to improve the VQA predictive accuracy. Most of them comprise three major components: (i) independent representation learning of images and questions; (ii) feature fusion so the model can use information from both sources to answer visual questions; and (iii) the generation of the correct answer in natural language. With so many approaches being recently introduced, it became unclear the real contribution of each component for the ultimate performance of the model. The main goal of this paper is to provide a comprehensive analysis regarding the impact of each component in VQA models. Our extensive set of experiments cover both visual and textual elements, as well as the combination of these representations in form of fusion and attention mechanisms. Our major contribution is to identify core components for training VQA models so as to maximize their predictive performance. <<</Abstract>>> <<<Introduction>>> Recent research advances in Computer Vision (CV) and Natural Language Processing (NLP) introduced several tasks that are quite challenging to be solved, the so-called AI-complete problems. Most of those tasks require systems that understand information from multiple sources, i.e., semantics from visual and textual data, in order to provide some kind of reasoning. For instance, image captioning BIBREF0, BIBREF1, BIBREF2 presents itself as a hard task to solve, though it is actually challenging to quantitatively evaluate models on that task, and that recent studies BIBREF3 have raised questions on its AI-completeness. The Visual Question Answering (VQA) BIBREF3 task was introduced as an attempt to solve that issue: to be an actual AI-complete problem whose performance is easy to evaluate. It requires a system that receives as input an image and a free-form, open-ended, natural-language question to produce a natural-language answer as the output BIBREF3. It is a multidisciplinary topic that is gaining popularity by encompassing CV and NLP into a single architecture, what is usually regarded as a multimodal model BIBREF4, BIBREF5, BIBREF6. There are many real-world applications for models trained for Visual Question Answering, such as automatic surveillance video queries BIBREF7 and visually-impaired aiding BIBREF8, BIBREF9. Models trained for VQA are required to understand the semantics from images while finding relationships with the asked question. Therefore, those models must present a deep understanding of the image to properly perform inference and produce a reasonable answer to the visual question BIBREF10. 
In addition, it is much easier to evaluate this task since there is a finite set of possible answers for each image-question pair. Traditionally, VQA approaches comprise three major steps: (i) representation learning of the image and the question; (ii) projection of a single multimodal representation through fusion and attention modules that are capable of leveraging both visual and textual information; and (iii) the generation of the natural language answer to the question at hand. This task often requires sophisticated models that are able to understand a question expressed in text, identify relevant elements of the image, and evaluate how these two inputs correlate. Given the current interest of the scientific community in VQA, many recent advances try to improve individual components such as the image encoder, the question representation, or the fusion and attention strategies to better leverage both information sources. With so many approaches currently being introduced at the same time, it becomes unclear the real contribution and importance of each component within the proposed models. Thus, the main goal of this work is to understand the impact of each component on a proposed baseline architecture, which draws inspiration from the pioneer VQA model BIBREF3 (Fig. FIGREF1). Each component within that architecture is then systematically tested, allowing us to understand its impact on the system's final performance through a thorough set of experiments and ablation analysis. More specifically, we observe the impact of: (i) pre-trained word embeddings BIBREF11, BIBREF12, recurrent BIBREF13 and transformer-based sentence encoders BIBREF14 as question representation strategies; (ii) distinct convolutional neural networks used for visual feature extraction BIBREF15, BIBREF16, BIBREF17; and (iii) standard fusion strategies, as well as the importance of two main attention mechanisms BIBREF18, BIBREF19. We notice that even using a relatively simple baseline architecture, our best models are competitive to the (maybe overly-complex) state-of-the-art models BIBREF20, BIBREF21. Given the experimental nature of this work, we have trained over 130 neural network models, accounting for more than 600 GPU processing hours. We expect our findings to be useful as guidelines for training novel VQA models, and that they serve as a basis for the development of future architectures that seek to maximize predictive performance. <<</Introduction>>> <<<Related Work>>> The task of VAQ has gained attention since Antol et al. BIBREF3 presented a large-scale dataset with open-ended questions. Many of the developed VQA models employ a very similar architecture BIBREF3, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27: they represent images with features from pre-trained convolutional neural networks; they use word embeddings or recurrent neural networks to represent questions and/or answers; and they combine those features in a classification model over possible answers. Despite their wide adoption, RNN-based models suffer from their limited representation power BIBREF28, BIBREF29, BIBREF30, BIBREF31. Some recent approaches have investigated the application of the Transformer model BIBREF32 to tasks that incorporate visual and textual knowledge, as image captioning BIBREF28. Attention-based methods are also being continuously investigated since they enable reasoning by focusing on relevant objects or regions in original input features. 
They allow models to pay attention on important parts of visual or textual inputs at each step of a task. Visual attention models focus on small regions within an image to extract important features. A number of methods have adopted visual attention to benefit visual question answering BIBREF27, BIBREF33, BIBREF34. Recently, dynamic memory networks BIBREF27 integrate an attention mechanism with a memory module, and multimodal bilinear pooling BIBREF22, BIBREF20, BIBREF35 is exploited to expressively combine multimodal features and predict attention over the image. These methods commonly employ visual attention to find critical regions, but textual attention has been rarely incorporated into VQA systems. While all the aforementioned approaches have exploited those kind of mechanisms, in this paper we study the impact of such choices specifically for the task of VQA, and create a simple yet effective model. Burns et al. BIBREF36 conducted experiments comparing different word embeddings, language models, and embedding augmentation steps on five multimodal tasks: image-sentence retrieval, image captioning, visual question answering, phrase grounding, and text-to-clip retrieval. While their work focuses on textual experiments, our experiments cover both visual and textual elements, as well as the combination of these representations in form of fusion and attention mechanisms. To the best of our knowledge, this is the first paper that provides a comprehensive analysis on the impact of each major component within a VQA architecture. <<</Related Work>>> <<<Impact of VQA Components>>> In this section we first introduce the baseline approach, with default image and text encoders, alongside a pre-defined fusion strategy. That base approach is inspired by the pioneer of Antol et al. on VQA BIBREF3. To understand the importance of each component, we update the base architecture according to each component we are investigating. In our baseline model we replace the VGG network from BIBREF19 by a Faster RCNN pre-trained in the Visual Genome dataset BIBREF37. The default text encoding is given by the last hidden-state of a Bidirectional LSTM network, instead of the concatenation of the last hidden-state and memory cell used in the original work. Fig. FIGREF1 illustrates the proposed baseline architecture, which is subdivided into three major segments: independent feature extraction from (1) images and (2) questions, as well as (3) the fusion mechanism responsible to learn cross-modal features. The default text encoder (denoted by the pink rectangle in Fig. FIGREF1) employed in this work comprises a randomly initialized word-embedding module that takes a tokenized question and returns a continuum vector for each token. Those vectors are used to feed an LSTM network. The last hidden-state is used as the question encoding, which is projected with a linear layer into a $d$-dimensional space so it can be fused along to the visual features. As the default option for the LSTM network, we use a single layer with 2048 hidden units. Given that this text encoding approach is fully trainable, we hereby name it Learnable Word Embedding (LWE). For the question encoding, we explore pre-trained and randomly initialized word-embeddings in various settings, including Word2Vec (W2V) BIBREF12 and GloVe BIBREF11. We also explore the use of hidden-states of Skip-Thoughts Vector BIBREF13 and BERT BIBREF14 as replacements for word-embeddings and sentence encoding approaches. 
Regarding the visual feature extraction (depicted as the green rectangle in Fig. FIGREF1), we decided to use the pre-computed features proposed in BIBREF19. Such an architecture employs a ResNet-152 with a Faster-RCNN BIBREF15 fine-tuned on the Visual Genome dataset. We opted for this approach due to the fact that using pre-computed features is far more computationally efficient, allowing us to train several models with distinct configurations. Moreover, several recent approaches BIBREF20, BIBREF21, BIBREF38 employ that same strategy as well, making it easier to provide fair comparison to the state-of-the-art approaches. In this study we perform experiments with two additional networks widely used for the task at hand, namely VGG-16 BIBREF16 and ReSNet-101 BIBREF17. Given the multimodal nature of the problem we are dealing with, it is quite challenging to train proper image and question encoders so as to capture relevant semantic information from both of them. Nevertheless, another essential aspect of the architecture is the component that merges them altogether, allowing for the model to generate answers based on both information sources BIBREF39. The process of multimodal fusion consists itself in a research area with many approaches being recently proposed BIBREF20, BIBREF40, BIBREF22, BIBREF41. The fusion module receives the extracted image and query features, and provides multimodal features that theoretically present information that allows the system to answer to the visual question. There are many fusion strategies that can either assume quite simple forms, such as vector multiplication or concatenation, or be really complex, involving multilayered neural networks, tensor decomposition, and bi-linear pooling, just to name a few. Following BIBREF3, we adopt the element-wise vector multiplication (also referred as Hadamard product) as the default fusion strategy. This approach requires the feature representations to be fused to have the same dimensionality. Therefore, we project them using a fully-connected layer to reduce their dimension from 2048 to 1024. After being fused together, the multimodal features are finally passed through a fully-connected layer that provides scores (logits) further converted into probabilities via a softmax function ($S$). We want to maximize the probability $P(Y=y|X=x,Q=q)$ of the correct answer $y$ given the image $X$ and the provided question $Q$. Our models are trained to choose within a set comprised by the 3000 most frequent answers extracted from both training and validation sets of the VQA v2.0 dataset BIBREF42. <<</Impact of VQA Components>>> <<<Experimental Setup>>> <<<Dataset>>> For conducting this study we decided to use the VQA v2.0 dataset BIBREF42. It is one of the largest and most frequently used datasets for training and evaluation of models in this task, being the official dataset used in yearly challenges hosted by mainstream computer vision venues . This dataset enhances the original one BIBREF3 by alleviating bias problems within the data and increasing the original number of instances. VQA v2.0 contains over $200,000$ images from MSCOCO BIBREF43, over 1 million questions and $\approx 11$ million answers. In addition, it has at least two questions per image, which prevents the model from answering the question without considering the input image. We follow VQA v2.0 standards and adopt the official provided splits allowing for fair comparison with other approaches. The splits we use are Validation, Test-Dev, Test-Standard. 
In this work, results of the ablation experiments are reported on the Validation set, which is the default option used for this kind of experiment. In some experiments we also report the training set accuracy to verify evidence of overfitting due to excessive model complexity. Training data has a total of $443,757$ questions labeled with 4 million answers, while the Test-Dev has a total of $214,354$ questions. Note that the validation size is about 4-fold larger than ImageNet's, which contains about $50,000$ samples. Therefore, one must keep in mind that even small performance gaps might indicate quite significant results improvement. For instance, 1% accuracy gains depict $\approx 2,000$ additional instances being correctly classified. We submit the predictions of our best models to the online evaluation servers BIBREF44 so as to obtain results for the Test-Standard split, allowing for a fair comparison to state-of-the-art approaches. <<</Dataset>>> <<<Evaluation Metric>>> Free and open-ended questions result in a diverse set of possible answers BIBREF3. For some questions, a simple yes or no answer may be sufficient. Other questions, however, may require more complex answers. In addition, it is worth noticing that multiple answers may be considered correct, such as gray and light gray. Therefore, VQA v2.0 provides ten ground-truth answers for each question. These answers were collected from ten different randomly-chosen humans. The evaluation metric used to measure model performance in the open-ended Visual Question Answering task is a particular kind of accuracy. For each question in the input dataset, the model's most likely response is compared to the ten possible answers provided by humans in the dataset associated with that question BIBREF3, and evaluated according to Equation DISPLAY_FORM7. In this approach, the prediction is considered totally correct only if at least 3 out of 10 people provided that same answer. <<</Evaluation Metric>>> <<<Hyper-parameters>>> As in BIBREF20 we train our models in a classification-based manner, in which we minimize the cross-entropy loss calculated with an image-question-answer triplet sampled from the training set. We optimize the parameters of all VQA models using Adamax BIBREF45 optimizer with a base learning rate of $7 \times 10^{-4}$, with exception of BERT BIBREF14 in which we apply a 10-fold reduction as suggested in the original paper. We used a learning rate warm-up schedule in which we halve the base learning rate and linearly increase it until the fourth epoch where it reaches twice its base value. It remains the same until the tenth epoch, where we start applying a 25% decay every two epochs. Gradients are calculated using batch sizes of 64 instances, and we train all models for 20 epochs. <<</Hyper-parameters>>> <<</Experimental Setup>>> <<<Experimental Analysis>>> In this section we show the experimental analysis for each component in the baseline VQA model. We also provide a summary of our findings regarding the impact of each part. Finally, we train a model with all the components that provide top results and compare it against state-of-the-art approaches. <<<Text Encoder>>> In our first experiment, we analyze the impact of different embeddings for the textual representation of the questions. 
To this end, we evaluate: (i) the impact of word-embeddings (pre-trained, or trained from scratch); and (ii) the role of the temporal encoding function, i.e., distinct RNN types, as well as pre-trained sentence encoders (e.g., Skip-Thoughts, BERT). The word-embedding strategies we evaluate are Learnable Word Embedding (randomly initialized and trained from scratch), Word2Vec BIBREF12, and GloVe BIBREF11. We also use word-level representations from widely used sentence embedding strategies, namely Skip-Thoughts BIBREF13 and BERT BIBREF14. To do so, we use the hidden-states from the Skip-thoughts GRU network, while for BERT we use the activations of the last layer as word-level information. Those vectors feed an RNN that encodes the temporal sequence into a single global vector. Different types of RNNs are also investigated for encoding the textual representation, including LSTM BIBREF46, Bidirectional LSTM BIBREF47, GRU BIBREF48, and Bidirectional GRU. For bidirectional architectures we concatenate both forward and backward hidden-states so as to aggregate information from both directions. Those approaches are also compared to a linear strategy, where we use a fully-connected layer followed by a global average pooling over the temporal dimension. The linear strategy discards any order information, so that we can demonstrate the role of the recurrent network as a temporal encoder in improving model performance. Figure FIGREF5 shows the performance variation of different types of word-embeddings, recurrent networks, initialization strategies, and the effect of fine-tuning the textual encoder. Clearly, the linear layer is outperformed by any type of recurrent layer. When using Skip-Thoughts the difference reaches $2.22\%$, which accounts for almost $5,000$ instances that the linear model mistakenly labeled. The only case in which the linear approach performed well was when trained with BERT. That is expected since Transformer-based architectures employ several attention layers that have the advantage of achieving the full receptive field in all layers. While doing so, BERT also encodes temporal information with special positional vectors that allow for learning temporal relations. Hence, it is easier for the model to encode order information within word-level vectors without using recurrent layers. For the Skip-Thoughts vector model, considering that its original architecture is based on GRUs, we evaluate both the randomly initialized and the pre-trained GRU of the original model, described as [GRU] and [GRU (skip)], respectively. We noticed that both options present virtually the same performance. In fact, GRU trained from scratch performed $0.13\%$ better than its pre-trained version. Analyzing the results obtained with pre-trained word embeddings, it is clear that GloVe obtained consistently better results than the Word2Vec counterpart. We believe that GloVe vectors perform better given that they capture not only local context statistics as in Word2Vec, but they also incorporate global statistics such as co-occurrence of words. One can also observe that the use of different RNN models has only a minor effect on the results. It might be more advisable to use GRU networks since they halve the number of trainable parameters compared to LSTMs, while being faster and consistently presenting top results. Note also that the best results for Skip-Thoughts, Word2Vec, and GloVe were all quite similar, without any major variation regarding accuracy.
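The contrast between the order-agnostic linear strategy and a recurrent temporal encoder can be made concrete with a short sketch (PyTorch-style; the 768-dimensional word vectors and other dimensions are illustrative assumptions, not the exact experimental code):

```python
import torch
import torch.nn as nn

class LinearTemporalEncoder(nn.Module):
    """Order-agnostic baseline: per-token linear layer followed by global average pooling over time."""
    def __init__(self, emb_dim=768, out_dim=2048):
        super().__init__()
        self.fc = nn.Linear(emb_dim, out_dim)

    def forward(self, word_vectors):              # (batch, seq_len, emb_dim), e.g. GloVe or BERT vectors
        return self.fc(word_vectors).mean(dim=1)  # (batch, out_dim); word order is discarded

class GRUTemporalEncoder(nn.Module):
    """Order-aware alternative: a single-layer GRU whose last hidden state summarizes the question."""
    def __init__(self, emb_dim=768, hidden_dim=2048):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden_dim, num_layers=1, batch_first=True)

    def forward(self, word_vectors):
        _, h_n = self.gru(word_vectors)           # h_n: (1, batch, hidden_dim)
        return h_n.squeeze(0)
```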
The best overall result is achieved when using BERT to extract the textual features. BERT versions using either the linear layer or the RNNs outperformed all other pre-trained embeddings and sentence encoders. In addition, the overall training accuracy for BERT models is not so high compared to all other approaches. That might be an indication that BERT models are less prone to overfitting the training data, and therefore present better generalization ability. Results make it clear that when using BERT, one must fine-tune it to achieve top performance. Figure FIGREF5 shows that it is possible to achieve a $3\%$ to $4\%$ accuracy improvement when updating BERT weights with $1/10$ of the base learning rate. Moreover, Figure FIGREF6 shows that the use of a pre-training strategy is helpful, since Skip-Thoughts and BERT outperform trainable word-embeddings in most of the evaluated settings. It is also clear that single-layered RNNs provide the best results and are far more efficient in terms of parameters. <<</Text Encoder>>> <<<Image Encoder>>> Experiments in this section analyze the visual feature extraction layers. The baseline uses the Faster-RCNN BIBREF15 network, and we will also experiment with other pre-trained neural networks to encode image information so we can observe their impact on predictive performance. In addition to Faster-RCNN, we experiment with two widely used networks for VQA, namely ResNet-101 BIBREF17 and VGG-16 BIBREF16. Table TABREF11 illustrates the result of this experiment. Intuitively, visual features have a larger impact on the model's performance. The accuracy difference between the best and the worst performing approaches is $\approx 5\%$. That difference accounts for roughly $10,000$ validation set instances. VGG-16 visual features presented the worst accuracy, but that was expected since it is the oldest network used in this study. In addition, it is only sixteen layers deep, and it has been shown that the depth of the network is quite important to hierarchically encode complex structures. Moreover, the VGG-16 architecture encodes all the information in a 4096-dimensional vector that is extracted after the second fully-connected layer at the end. That vector encodes little to no spatial information, which makes it almost impossible for the network to answer questions on the spatial positioning of objects. ResNet-101 obtained intermediate results. It is a much deeper network than VGG-16 and it achieves much better results on ImageNet, which shows the difference in learning capacity between the two networks. ResNet-101 provides information encoded in 2048-dimensional vectors, extracted from the global average pooling layer, which also summarizes spatial information into a fixed-sized representation. The best result as a visual feature extractor was achieved by the Faster-RCNN fine-tuned on the Visual Genome dataset. Such a network employs a ResNet-152 as the backbone for training an RPN-based object detector. In addition, given that it was fine-tuned on the Visual Genome dataset, it allows for the training of robust models suited for general feature extraction. Hence, unlike the previous ResNet and VGG approaches, the Faster-RCNN approach is trained to detect objects, and therefore one can use it to extract features from the most relevant image regions. Each region is encoded as a 2048-dimensional vector.
They contain rich information regarding regions and objects, since object detectors often operate over high-dimensional images, instead of resized ones (e.g., $256 \times 256$) as in typical classification networks. Hence, even after applying global pooling over regions, the network still has access to spatial information because of the pre-extracted regions of interest from each image. <<</Image Encoder>>> <<<Fusion strategy>>> In order to analyze the impact that the different fusion methods have on the network performance, three simple fusion mechanisms were analyzed: element-wise multiplication, concatenation, and summation of the textual and visual features. The choice of the fusion component is essential in VQA architectures, since its output generates multi-modal features used for answering the given visual question. The resulting multi-modal vector is projected into a 3000-dimensional label space, which provides a probability distribution over each possible answer to the question at hand BIBREF39. Table presents the experimental results with the fusion strategies. The best result is obtained using the element-wise multiplication. Such an approach functions as a filtering strategy that is able to scale down the importance of irrelevant dimensions from the visual-question feature vectors. In other words, vector dimensions with high cross-modal affinity will have their magnitudes increased, differently from the uncorrelated ones that will have their values reduced. Summation does provide the worst results overall, closely followed by the concatenation operator. Moreover, among all the fusion strategies used in this study, multiplication seems to ease the training process as it presents a much higher training set accuracy ($\approx 11\% $ improvement) as well. <<</Fusion strategy>>> <<<Attention Mechanism>>> Finally, we analyze the impact of different attention mechanisms, such as Top-Down Attention BIBREF19 and Co-Attention BIBREF18. These mechanisms are used to provide distinct image representations according to the asked questions. Attention allows the model to focus on the most relevant visual information required to generate proper answers to the given questions. Hence, it is possible to generate several distinct representations of the same image, which also has a data augmentation effect. <<<Top-Down Attention>>> Top-down attention, as the name suggests, uses global features from questions to weight local visual information. The global textual features $\mathbf {q} \in \mathbb {R}^{2048}$ are selected from the last internal state of the RNN, and the image features $V \in \mathbb {R}^{k \times 2048}$ are extracted from the Faster-RCNN, where $k$ represents the number of regions extracted from the image. In the present work we used $k=36$. The question features are linearly projected so as to reduce its dimension to 512, which is the size used in the original paper BIBREF19. Image features are concatenated with the textual features, generating a matrix $C$ of dimensions $k \times 2560$. Features resulting from that concatenation are first non-linearly projected with a trainable weight matrix $W_1^{2560 \times 512}$ generating a novel multimodal representation for each image region: Therefore, such a layer learns image-question relations, generating $k \times 512 $ features that are transformed by an activation function $\phi $. Often, $\phi $ is ReLU BIBREF49, Tanh BIBREF50, or Gated Tanh BIBREF51. 
The latter employs both the logistic Sigmoid and Tanh, in a gating scheme $\sigma (x) \times \textsc {tanh}(x)$. A second fully-connected layer is employed to summarize the 512-dimensional vectors into $h$ values per region ($k \times h$). It is usual to use a small value for $h$ such as $\lbrace 1, 2\rbrace $. The role of $h$ is to allow the model to produce distinct attention maps, which is useful for understanding complex sentences that require distinct viewpoints. Values produced by this layer are normalized with a softmax function applied on the columns of the matrix, as follows. It generates an attention mask $A^{k \times h}$ used to weight image regions, producing the image vector $\hat{\mathbf {v}}$, as shown in Equation DISPLAY_FORM17. Note that when $h>1$, the dimensionality of the visual features increases $h$-fold. Hence, $\hat{\mathbf {v}}^{h \times 2048}$, which we reshape to be a $(2048\times h)\times 1$ vector, constitutes the final question-aware image representation. <<</Top-Down Attention>>> <<<Co-Attention>>> Unlike the Top-Down attention mechanism, Co-Attention is based on the computation of local similarities between all questions words and image regions. It expects two inputs: an image feature matrix $V^{k \times 2048}$, such that each image feature vector encodes an image region out of $k$; and a set of word-level features $Q^{n \times 2048}$. Both $V$ and $Q$ are normalized to have unit $L_2$ norm, so their multiplication $VQ^T$ results in the cosine similarity matrix used as guidance for generating the filtered image features. A context feature matrix $C^{k \times 2048}$ is given by: Finally, $C$ is normalized with a $\textsc {softmax}$ function, and the $k$ regions are summed so as to generate a 1024-sized vector $\hat{\mathbf {v}}$ to represent relevant visual features $V$ based on question $Q$: Table depicts the results obtained by adding the attention mechanisms to the baseline model. For these experiments we used only element-wise multiplication as fusion strategy, given that it presented the best performance in our previous experiments. We observe that attention is a crucial mechanism for VQA, leading to an $\approx 6\%$ accuracy improvement. The best performing attention approach was Top-Down attention with ReLU activation, followed closely by Co-Attention. We noticed that when using Gated Tanh within Top-Down attention, results degraded 2%. In addition, experiments show that $L_2$ normalization is quite important in Co-Attention, providing an improvement of almost $6\%$. <<</Co-Attention>>> <<</Attention Mechanism>>> <<</Experimental Analysis>>> <<<Findings Summary>>> The experiments presented in Section SECREF9 have shown that the best text encoder approach is fine-tuning a pre-trained BERT model with a GRU network trained from scratch. In Section SECREF10 we performed experiments for analyzing the impact of pre-trained networks to extract visual features, among them Faster-RCNN, ResNet-101, and VGG-16. The best result was using a Faster-RCNN, reaching a $3\%$ improvement in the overall accuracy. We analyzed different ways to perform multimodal feature fusion in Section SECREF12. In this sense, the fusion mechanism that obtained the best result was the element-wise product. It provides $\approx 3\%$ higher overall accuracy when compared to the other fusion approaches. Finally, in Section SECREF13 we have studied two main attention mechanisms and their variations. 
They aim to provide question-aware image representations by attending to the most important spatial features. The top performing mechanism is the Top-Down attention with the ReLU activation function, which provided an $\approx 6\%$ overall accuracy improvement when compared to the base architecture. <<</Findings Summary>>> <<<Comparison to state-of-the-art methods>>> After individually evaluating each component in a typical VQA architecture, our goal in this section is to compare the approach that combines the best performing components into a single model with the current state-of-the-art in VQA. Our comparison involves the following VQA models: Deeper-lstm-q BIBREF3, MCB BIBREF22, ReasonNet BIBREF52, Tips&Tricks BIBREF53, and the recent block BIBREF20. Tables TABREF21 and show that our best architecture outperforms all competitors but block, in both Test-Standard (Table TABREF21) and Test-Dev sets (Table ). Despite block presenting a marginal advantage in accuracy, we have shown in this paper that by carefully analyzing each individual component we are capable of generating a method, without any bells and whistles, that is on par with much more complex methods. For instance, block and MCB require 18M and 32M parameters respectively for the fusion scheme alone, while our fusion approach is parameter-free. Moreover, our model performs far better than BIBREF22, BIBREF52, and BIBREF53, which are also arguably much more complex methods. <<</Comparison to state-of-the-art methods>>> <<<Conclusion>>> In this study we observed the actual impact of several components within VQA models. We have shown that transformer-based encoders together with GRU models provide the best performance for question representation. Notably, we demonstrated that using pre-trained text representations provides consistent performance improvements across several hyper-parameter configurations. We have also shown that using an object detector fine-tuned with external data provides large improvements in accuracy. Our experiments have demonstrated that even simple fusion strategies can achieve performance on par with the state-of-the-art. Moreover, we have shown that attention mechanisms are paramount for learning top performing networks, since they allow producing question-aware image representations that are capable of encoding spatial relations. It became clear that Top-Down is the preferred attention method, given its results with ReLU activation. It is now clear that some configurations used in some architectures (e.g., additional RNN layers) are actually irrelevant and can be removed altogether without harming accuracy. For future work, we expect to expand this study in two main ways: (i) cover additional datasets, such as Visual Genome BIBREF37; and (ii) study in an exhaustive fashion how distinct components interact with each other, instead of observing their impact alone on the classification performance. <<</Conclusion>>> <<</Title>>>
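To summarize how the best-performing components identified above fit together, here is a minimal PyTorch-style sketch of the resulting pipeline: Faster-RCNN region features, a 2048-dimensional question vector, single-glimpse top-down attention with ReLU, Hadamard fusion in a 1024-dimensional space, and a 3000-way classifier. It is an illustrative reconstruction under these assumed shapes, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownVQA(nn.Module):
    """Sketch: region features + question vector -> top-down attention -> Hadamard fusion -> answer logits."""
    def __init__(self, q_dim=2048, v_dim=2048, att_hidden=512, glimpses=1, fused_dim=1024, n_answers=3000):
        super().__init__()
        self.q_proj_att = nn.Linear(q_dim, att_hidden)             # 2048 -> 512 question projection for attention
        self.att_fc1 = nn.Linear(v_dim + att_hidden, att_hidden)   # 2560 -> 512 joint image-question features
        self.att_fc2 = nn.Linear(att_hidden, glimpses)             # 512 -> h attention maps
        self.q_proj = nn.Linear(q_dim, fused_dim)                  # projections into the shared 1024-d fusion space
        self.v_proj = nn.Linear(v_dim * glimpses, fused_dim)
        self.classifier = nn.Linear(fused_dim, n_answers)

    def forward(self, V, q):                          # V: (batch, k, 2048) regions, q: (batch, 2048) question
        k = V.size(1)
        q_att = self.q_proj_att(q).unsqueeze(1).expand(-1, k, -1)     # (batch, k, 512)
        joint = F.relu(self.att_fc1(torch.cat([V, q_att], dim=-1)))   # (batch, k, 512)
        alpha = torch.softmax(self.att_fc2(joint), dim=1)             # (batch, k, h), normalized over regions
        v_hat = torch.einsum('bkh,bkd->bhd', alpha, V).flatten(1)     # (batch, h*2048) attended image vector
        fused = self.q_proj(q) * self.v_proj(v_hat)                   # element-wise (Hadamard) fusion
        return self.classifier(fused)                                 # logits over the 3000 candidate answers
```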
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nImpact of VQA Components\nExperimental Setup\nDataset\nEvaluation Metric\nHyper-parameters\nExperimental Analysis\nText Encoder\nImage Encoder\nFusion strategy\nAttention Mechanism\nTop-Down Attention\nCo-Attention\nFindings Summary\nComparison to state-of-the-art methods\nConclusion" ], "type": "outline" }
1909.07512
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Short-Text Classification Using Unsupervised Keyword Expansion <<<Abstract>>> Short-text classification, like all data science, struggles to achieve high performance using limited data. As a solution, a short sentence may be expanded with new and relevant feature words to form an artificially enlarged dataset, and add new features to testing data. This paper applies a novel approach to text expansion by generating new words directly for each input sentence, thus requiring no additional datasets or previous training. In this unsupervised approach, new keywords are formed within the hidden states of a pre-trained language model and then used to create extended pseudo documents. The word generation process was assessed by examining how well the predicted words matched to topics of the input sentence. It was found that this method could produce 3-10 relevant new words for each target topic, while generating just 1 word related to each non-target topic. Generated words were then added to short news headlines to create extended pseudo headlines. Experimental results have shown that models trained using the pseudo headlines can improve classification accuracy when limiting the number of training examples. <<</Abstract>>> <<<Introduction>>> The web has provided researchers with vast amounts of unlabeled text data, and enabled the development of increasingly sophisticated language models which can achieve state of the art performance despite having no task specific training BIBREF0, BIBREF1, BIBREF2. It is desirable to adapt these models for bespoke tasks such as short text classification. Short-text is nuanced, difficult to model statistically, and sparse in features, hindering traditional analysis BIBREF3. These difficulties become further compounded when training is limited, as is the case for many practical applications. This paper provides a method to expand short-text with additional keywords, generated using a pre-trained language model. The method takes advantage of general language understanding to suggest contextually relevant new words, without necessitating additional domain data. The method can form both derivatives of the input vocabulary, and entirely new words arising from contextualised word interactions and is ideally suited for applications where data volume is limited. figureBinary Classification of short headlines into 'WorldPost' or 'Crime' categories, shows improved performance with extended pseudo headlines when the training set is small. Using: Random forest classifier, 1000 test examples, 10-fold cross validation. <<</Introduction>>> <<<Literature Review>>> Document expansion methods have typically focused on creating new features with the help of custom models. Word co-occurrence models BIBREF4, topic modeling BIBREF5, latent concept expansion BIBREF6, and word embedding clustering BIBREF7, are all examples of document expansion methods that must first be trained using either the original dataset or an external dataset from within the same domain. The expansion models may therefore only be used when there is a sufficiently large training set. 
Transfer learning was developed as a method of reducing the need for training data by adapting models trained mostly from external data BIBREF8. Transfer learning can be an effective method for short-text classification and requires little domain specific training data BIBREF9, BIBREF10; however, it demands training a new model for every new classification task and does not offer a general solution to sparse data enrichment. Recently, multi-task language models have been developed and trained using ultra-large online datasets without being confined to any narrow applications BIBREF0, BIBREF1, BIBREF2. It is now possible to benefit from the information these models contain by adapting them to the task of text expansion and text classification. This paper presents a novel approach that combines the advantages of document expansion, transfer learning, and multitask modeling. It expands documents with new and relevant keywords by using the BERT pre-trained learning model, thus taking advantage of transfer learning acquired during BERT's pretraining. It is also unsupervised and requires no task specific training, thus allowing the same model to be applied to many different tasks or domains. <<</Literature Review>>> <<<Procedures>>> <<<Dataset>>> The News Category Dataset BIBREF11 is a collection of headlines published by HuffPost BIBREF12 between 2012 and 2018, and was obtained online from Kaggle BIBREF13. The full dataset contains 200k news headlines with category labels, publication dates, and short text descriptions. For this analysis, a sample of roughly 33k headlines spanning 23 categories was used. Further analysis can be found in table SECREF12 in the appendix. <<</Dataset>>> <<<Word Generation>>> Words were generated using the BERT pre-trained model developed and trained by Google AI Language BIBREF0. BERT creates contextualized word embeddings by passing a list of word tokens through 12 hidden transformer layers and generating encoded word vectors. To generate extended text, an original short-text document was passed to pre-trained BERT. At each transformer layer a new word embedding was formed and saved. BERT's vector decoder was then used to convert hidden word vectors to candidate words; the top three candidate words at each encoder layer were kept. Each input word produced 48 candidate words; however, many were duplicates. Examples of generated words per layer can be found in tables SECREF12 and SECREF12 in the appendix. The generated words were sorted based on frequency; duplicate words from the original input were removed, as were stop-words, punctuation, and incomplete words. The generated words were then appended to the original document to create extended pseudo documents; the extended document was limited to 120 words in order to normalize each feature set. Further analysis can be found in table SECREF12 in the appendix. figureThe proposed method uses the BERT pre-trained word embedding model to generate new words which are appended to the original text, creating extended pseudo documents. <<</Word Generation>>> <<<Topic Evaluation>>> To test the proposed method's ability to generate unsupervised words, it was necessary to devise a method of measuring word relevance. Topic modeling was used based on the assumption that words found in the same topic are more relevant to one another than words from different topics BIBREF14. The complete 200k headline dataset BIBREF11 was modeled using a Naïve Bayes Algorithm BIBREF15 to create a word-category co-occurrence model.
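A compact sketch of how such a word-category relevance table can be built is shown below (scikit-learn-style; the exact preprocessing and model settings used by the authors may differ):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def build_topic_table(headlines, categories, top_k=200):
    """Fit a Naive Bayes word-category model and keep the top_k most relevant words per category."""
    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(headlines)                          # word counts per headline
    nb = MultinomialNB().fit(X, categories)
    vocab = np.array(vectorizer.get_feature_names_out())
    topic_table = {}
    for idx, category in enumerate(nb.classes_):
        top = np.argsort(nb.feature_log_prob_[idx])[::-1][:top_k]    # highest P(word | category)
        topic_table[category] = set(vocab[top])
    return topic_table

# word relevance = how many generated words fall inside the topic set of the headline's own category
```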
The top 200 most relevant words were then found for each category and used to create the topic table SECREF12. It was assumed that each category represented its own unique topic. The number of relevant output words as a function of the headline’s category label was measured, and can be found in figure SECREF4. The results demonstrate that the proposed method could correctly identify new words relevant to the input topic at a signal-to-noise ratio of 4 to 1. figureThe number of generated words within each topic was counted; topics which matched the original headline label were considered 'on target'. Results indicate that the unsupervised generation method produced far more words relating to the label category than to other topics. Tested on 7600 examples spanning 23 topics. <<</Topic Evaluation>>> <<<Binary and Multi-class Classification Experiments>>> Three datasets were formed by taking equal-length samples from each category label. The new datasets are ‘Worldpost vs Crime’, ‘Politics vs Entertainment’, and ‘Sports vs Comedy’; a fourth multiclass dataset was formed by combining the three sets above. For each example three feature options were created by extending every headline by 0, 15 and 120 words. Before every run, a test set was removed and held aside. The remaining data was sampled based on the desired training size. Each feature option was encoded using a separate tf-idf vectorizer BIBREF16 and used to train a random-forest classifier BIBREF17 with 300 estimators for binary predictions and 900 estimators for multiclass. Random forest was chosen since it performs well on small datasets and is resistant to overfitting BIBREF18. Each feature option was evaluated against its corresponding test set. 10 runs were completed for each dataset. <<</Binary and Multi-class Classification Experiments>>> <<</Procedures>>> <<<Results and Analysis>>> <<<Evaluating word relevance>>> It is desirable to generate new words which are relevant to the target topics and increase predictive signal, while avoiding words which are irrelevant, add noise, and mislead predictions. The strategy, described in section SECREF4, was created to measure word relevance and quantify the unsupervised model performance. It can be seen from figures SECREF4 and SECREF12 in the appendix that the proposed expansion method is effective at generating words which relate to topics of the input sentence, even from very little data. From the context of just a single word, the method can generate 3 new relevant words, and can generate as many as 10 new relevant words from sentences which contain 5 topic-related words SECREF12. While the method is susceptible to noise, producing on average 1 word related to each irrelevant topic, the number of correct predictions statistically exceeds the noise. Furthermore, because the proposed method does not have any prior knowledge of its target topics, it remains completely domain agnostic, and can be applied generally to short text of any topic. <<<Binary Classification>>> Comparing the performance of extended pseudo documents on three separate binary classification datasets shows significant improvement over the baseline in the sparse data region of 100 to 1000 training examples. The ‘Worldpost vs Crime’ dataset showed the most improvement, as seen in figure SECREF1.
Within the sparse data region the extended pseudo documents could achieve performance similar to the original headlines with only half the data, and improve the F1 score by between 1.7% and 13.9%. The ‘Comedy vs Sports’ dataset, seen in figure SECREF11, showed an average improvement of 2% within the sparse region. The ‘Politics vs Entertainment’ dataset, figure SECREF11, was unique. It is the only dataset for which a 15-word extended feature set surpassed the 120-word feature set. It demonstrates that the length of the extended pseudo documents can behave like a hyper-parameter for certain datasets, and should be tuned according to the training size. <<</Binary Classification>>> <<<Multiclass Classification>>> The extended pseudo documents improved multiclass performance by 4.6% on average, in the region of 100 to 3000 training examples, as seen in figure SECREF11. The results indicate the effectiveness of the proposed method at suggesting relevant words within a narrow topic domain, even without any previous domain knowledge. In each instance it was found that the extended pseudo documents only improved performance on small training sizes. This demonstrates that while the extended pseudo docs are effective at generating artificial data, they also produce a lot of noise. Once the training size exceeds a certain threshold, it is no longer necessary to create additional data, and using extended documents simply adds noise to an otherwise well trained model. figureBinary Classification of 'Politics' or 'Entertainment' demonstrates that the number of added words can behave like a hyper-parameter and should be tuned based on training size. Tested on 1000 examples with 10-fold cross validation. figureBinary Classification of 'Politics' vs 'Sports' has less improvement compared to other datasets, which indicates that the proposed method, while constructed to be domain agnostic, shows better performance for certain topics. Tested on 1000 examples with 10-fold cross validation. figureAdded Words improve Multiclass Classification between 1.5% and 13% in the range of 150 to 2000 training examples. Tests were conducted using equal-size samples of headlines categorized into 'World-Post', 'Crime', 'Politics', 'Entertainment', 'Sports' or 'Comedy'. A 900-estimator random forest classifier was trained for each data point, tested using 2000 examples, and averaged using 10-fold cross validation. <<</Multiclass Classification>>> <<</Evaluating word relevance>>> <<</Results and Analysis>>> <<<Discussion>>> Generating new words based solely on ultra-small prompts of 10 words or fewer is a major challenge. A short sentence is often characterized by just a single keyword, and modeling topics from such little data is difficult. Any method of keyword generation that relies too heavily on the individual words will lack context and fail to add new information, while attempting to freely form new words without any prior domain knowledge is uncertain and leads to misleading suggestions. This method attempts to find a balance between synonym and free-form word generation, by constraining words to fit the original sentence while still allowing for word-word and word-sentence interactions to create novel outputs. The word vectors must move through the transformer layers together and therefore maintain the same token order and semantic meaning; however, they also receive new input from the surrounding words at each layer.
The result, as can be seen from tables SECREF12 and SECREF12 in the appendix, is that the first few transformer layers mostly produce synonyms of the input sentence, since the word vectors have not been greatly modified. The central transformer layers produce words that are relevant and novel, since they are still slightly constrained but also have been greatly influenced by sentence context. And the final transformer layers produce mostly nonsensical words, since they have been completely altered from their original state and lost their ability to retrieve real words. This method is unique since it avoids needing a prior dataset by using the information found within the weights of a general language model. Word embedding models, and BERT in particular, contain vast amounts of information collected through the course of their training. BERT Base, for instance, has 110 million parameters and was trained on both the Wikipedia corpus and BooksCorpus BIBREF0, a combined collection of over 3 billion words. The full potential of such vastly trained general language models is still unfolding. This paper demonstrates that by carefully prompting and analysing these models, it is possible to extract new information from them, and extend short-text analysis beyond the limitations posed by word count. <<</Discussion>>> <<<Appendix>>> <<<Additional Tables and Figures>>> figureA Topic table, created from the category labels of the complete headline dataset, can be used to measure the relevance of generated words. An original headline was analyzed by counting the number of words which related to each topic. The generated words were then analyzed in the same way. The change in word count between input topics and output topics was measured and plotted as seen in figure SECREF12. figureBox plot of the number of generated words within a topic as a function of the number of input words within the same topic. Results indicate that additional related words can be generated by increasing the signal of the input prompt. Tested on 7600 examples spanning 23 topics. figureInformation regarding the original headlines, and generated words used to create extended pseudo headlines. figureTop 3 guesses for each token position at each layer of a BERT pre-trained embedding model. Given the input sentence '2 peoplpe injured in Indiana school shooting', the full list of generated words can be obtained from the values in the table. figureTop 3 guesses for each token position at each layer of a BERT pre-trained embedding model. <<</Additional Tables and Figures>>> <<</Appendix>>> <<</Title>>>
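As a concrete illustration of the layer-wise generation procedure described in the Word Generation section and discussed above, the sketch below assumes the HuggingFace transformers API and reuses BERT's masked-language-model head as the "vector decoder"; the additional filtering of stop-words, punctuation, and word fragments is omitted.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def generate_candidates(sentence, top_n=3):
    """Decode the hidden states of every BERT layer back into words, keeping the top_n candidates per position."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states        # tuple: embedding output + one tensor per layer
    candidates = set()
    for layer in hidden_states[1:]:                           # skip the raw embedding layer
        logits = model.cls(layer)                             # reuse the MLM head as the vector decoder
        top_ids = logits.topk(top_n, dim=-1).indices[0]       # (seq_len, top_n)
        for position in top_ids:
            candidates.update(tokenizer.convert_ids_to_tokens(position.tolist()))
    return candidates - set(sentence.lower().split())         # crude removal of words already in the input

# new_words = generate_candidates("stock markets rally after rate cut")  # appended to form the pseudo headline
```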
{ "references": [ "Title\nAbstract\nIntroduction\nLiterature Review\nProcedures\nDataset\nWord Generation\nTopic Evaluation\nBinary and Multi-class Classification Experiments\nResults and Analysis\nEvaluating word relevance\nBinary Classification\nMulticlass Classification\nDiscussion\nAppendix\nAdditional Tables and Figures" ], "type": "outline" }
1910.08418
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Controlling Utterance Length in NMT-based Word Segmentation with Attention <<<Abstract>>> One of the basic tasks of computational language documentation (CLD) is to identify word boundaries in an unsegmented phonemic stream. While several unsupervised monolingual word segmentation algorithms exist in the literature, they are challenged in real-world CLD settings by the small amount of available data. A possible remedy is to take advantage of glosses or translation in a foreign, well-resourced, language, which often exist for such data. In this paper, we explore and compare ways to exploit neural machine translation models to perform unsupervised boundary detection with bilingual information, notably introducing a new loss function for jointly learning alignment and segmentation. We experiment with an actual under-resourced language, Mboshi, and show that these techniques can effectively control the output segmentation length. <<</Abstract>>> <<<Introduction>>> All over the world, languages are disappearing at an unprecedented rate, fostering the need for specific tools aimed to aid field linguists to collect, transcribe, analyze, and annotate endangered language data (e.g. BIBREF0, BIBREF1). A remarkable effort in this direction has improved the data collection procedures and tools BIBREF2, BIBREF3, enabling to collect corpora for an increasing number of endangered languages (e.g. BIBREF4). One of the basic tasks of computational language documentation (CLD) is to identify word or morpheme boundaries in an unsegmented phonemic or orthographic stream. Several unsupervised monolingual word segmentation algorithms exist in the literature, based, for instance, on information-theoretic BIBREF5, BIBREF6 or nonparametric Bayesian techniques BIBREF7, BIBREF8. These techniques are, however, challenged in real-world settings by the small amount of available data. A possible remedy is to take advantage of glosses or translations in a foreign, well-resourced language (WL), which often exist for such data, hoping that the bilingual context will provide additional cues to guide the segmentation algorithm. Such techniques have already been explored, for instance, in BIBREF9, BIBREF10 in the context of improving statistical alignment and translation models; and in BIBREF11, BIBREF12, BIBREF13 using Attentional Neural Machine Translation (NMT) models. In these latter studies, word segmentation is obtained by post-processing attention matrices, taking attention information as a noisy proxy to word alignment BIBREF14. In this paper, we explore ways to exploit neural machine translation models to perform unsupervised boundary detection with bilingual information. Our main contribution is a new loss function for jointly learning alignment and segmentation in neural translation models, allowing us to better control the length of utterances. Our experiments with an actual under-resourced language (UL), Mboshi BIBREF17, show that this technique outperforms our bilingual segmentation baseline. 
<<</Introduction>>> <<<Recurrent architectures in NMT>>> In this section, we briefly review the main concepts of recurrent architectures for machine translation introduced in BIBREF18, BIBREF19, BIBREF20. In our setting, the source and target sentences are always observed and we are mostly interested in the attention mechanism that is used to induce word segmentation. <<<RNN encoder-decoder>>> Sequence-to-sequence models transform a variable-length source sequence into a variable-length target output sequence. In our context, the source sequence is a sequence of words $w_1, \ldots , w_J$ and the target sequence is an unsegmented sequence of phonemes or characters $\omega _1, \ldots , \omega _I$. In the RNN encoder-decoder architecture, an encoder consisting of a RNN reads a sequence of word embeddings $e(w_1),\dots ,e(w_J)$ representing the source and produces a dense representation $c$ of this sentence in a low-dimensional vector space. Vector $c$ is then fed to an RNN decoder producing the output translation $\omega _1,\dots ,\omega _I$ sequentially. At each step of the input sequence, the encoder hidden states $h_j$ are computed as: In most cases, $\phi $ corresponds to a long short-term memory (LSTM) BIBREF24 unit or a gated recurrent unit (GRU) BIBREF25, and $h_J$ is used as the fixed-length context vector $c$ initializing the RNN decoder. On the target side, the decoder predicts each word $\omega _i$, given the context vector $c$ (in the simplest case, $h_J$, the last hidden state of the encoder) and the previously predicted words, using the probability distribution over the output vocabulary $V_T$: where $s_i$ is the hidden state of the decoder RNN and $g$ is a nonlinear function (e.g. a multi-layer perceptron with a softmax layer) computed by the output layer of the decoder. The hidden state $s_i$ is then updated according to: where $f$ again corresponds to the function computed by an LSTM or GRU cell. The encoder and the decoder are trained jointly to maximize the likelihood of the translation $\mathrm {\Omega }=\Omega _1, \dots , \Omega _I$ given the source sentence $\mathrm {w}=w_1,\dots ,w_J$. As reference target words are available during training, $\Omega _i$ (and the corresponding embedding) can be used instead of $\omega _i$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6), a technique known as teacher forcing BIBREF26. <<</RNN encoder-decoder>>> <<<The attention mechanism>>> Encoding a variable-length source sentence in a fixed-length vector can lead to poor translation results with long sentences BIBREF19. To address this problem, BIBREF20 introduces an attention mechanism which provides a flexible source context to better inform the decoder's decisions. This means that the fixed context vector $c$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) is replaced with a position-dependent context $c_i$, defined as: where weights $\alpha _{ij}$ are computed by an attention model made of a multi-layer perceptron (MLP) followed by a softmax layer. Denoting $a$ the function computed by the MLP, then where $e_{ij}$ is known as the energy associated to $\alpha _{ij}$. Lines in the attention matrix $A = (\alpha _{ij})$ sum to 1, and weights $\alpha _{ij}$ can be interpreted as the probability that target word $\omega _i$ is aligned to source word $w_j$. BIBREF20 qualitatively investigated such soft alignments and concluded that their model can correctly align target words to relevant source words (see also BIBREF27, BIBREF28). 
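In code, the attention model of Equations (DISPLAY_FORM9) and (DISPLAY_FORM10) amounts to a small scoring MLP followed by a softmax over source positions. The PyTorch-style sketch below uses the additive (Bahdanau-style) form with assumed shapes and omits the masking of padded positions.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Sketch of the attention model: an MLP scores each encoder state against the previous decoder state;
    a softmax over source positions yields the weights alpha_ij used to build the context vector c_i."""
    def __init__(self, dec_dim, enc_dim, att_dim):
        super().__init__()
        self.W_a = nn.Linear(dec_dim, att_dim, bias=False)
        self.U_a = nn.Linear(enc_dim, att_dim, bias=False)
        self.v_a = nn.Linear(att_dim, 1, bias=False)

    def forward(self, s_prev, H):                  # s_prev: (batch, dec_dim), H: (batch, J, enc_dim)
        energies = self.v_a(torch.tanh(self.W_a(s_prev).unsqueeze(1) + self.U_a(H))).squeeze(-1)  # (batch, J)
        alpha = torch.softmax(energies, dim=-1)    # one line of the attention matrix A
        context = torch.bmm(alpha.unsqueeze(1), H).squeeze(1)   # c_i = sum_j alpha_ij * h_j
        return context, alpha
```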
Our segmentation method (Section SECREF3) relies on the assumption that the same holds when aligning characters or phonemes on the target side to source words. <<</The attention mechanism>>> <<</Recurrent architectures in NMT>>> <<<Attention-based word segmentation>>> Recall that our goal is to discover words in an unsegmented stream of target characters (or phonemes) in the under-resourced language. In this section, we first describe a baseline method inspired by the “align to segment” approach of BIBREF12, BIBREF13. We then propose two extensions providing the model with a signal relevant to the segmentation process, so as to move towards a joint learning of segmentation and alignment. <<<Align to segment>>> An attention matrix $A = (\alpha _{ij})$ can be interpreted as a soft alignment matrix between target and source units, where each cell $\alpha _{ij}$ corresponds to the probability for target symbol $\omega _i$ (here, a phone) to be aligned to the source word $w_j$ (cf. Equation (DISPLAY_FORM10)). In our context, where words need to be discovered on the target side, we follow BIBREF12, BIBREF13 and perform word segmentation as follows: (i) train an attentional RNN encoder-decoder model using teacher forcing (see Section SECREF2); (ii) force-decode the entire corpus and extract one attention matrix for each sentence pair; (iii) identify boundaries in the target sequences. For each target unit $\omega _i$ of the UL, we identify the source word $w_{a_i}$ to which it is most likely aligned: $\forall i, a_i = \operatornamewithlimits{argmax}_j \alpha _{ij}$. Given these alignment links, a word segmentation is computed by introducing a word boundary in the target whenever two adjacent units are not aligned with the same source word ($a_i \ne a_{i+1}$). Considering a (simulated) low-resource setting, and building on BIBREF14's work, BIBREF11 propose to smooth attentional alignments, either by post-processing attention matrices, or by flattening the softmax function in the attention model (see Equation (DISPLAY_FORM10)) with a temperature parameter $T$. This makes sense as the authors examine attentional alignments obtained while training from UL phonemes to WL words. But when translating from WL words to UL characters, this seems less useful: smoothing will encourage a character to align to many words. This technique is further explored by BIBREF29, who make the temperature parameter trainable and specific to each decoding step, so that the model can learn how to control the softness or sharpness of attention distributions, depending on the current word being decoded. <<</Align to segment>>> <<<Towards joint alignment and segmentation>>> One limitation in the approach described above lies in the absence of a signal relevant to segmentation during RNN training. Attempting to move towards a joint learning of alignment and segmentation, we propose here two extensions aimed at introducing constraints derived from our segmentation heuristic in the training process. <<<Word-length bias>>> Our first extension relies on the assumption that the length of aligned source and target words should correlate. Being in a relationship of mutual translation, aligned words are expected to have comparable frequencies and meaning, hence comparable lengths. This means that the longer a source word is, the more target units should be aligned to it.
We implement this idea in the attention mechanism as a word-length bias, changing the computation of the context vector from Equation (DISPLAY_FORM9) to: where $\psi $ is a monotonically increasing function of the length $|w_j|$ of word $w_j$. This will encourage target units to attend more to longer source words. In practice, we choose $\psi $ to be the identity function and renormalize so as to ensure that lines still sum to 1 in the attention matrices. The context vectors $c_i$ are now computed with attention weights $\tilde{\alpha }_{ij}$ as: We finally derive the target segmentation from the attention matrix $A = (\tilde{\alpha }_{ij})$, following the method of Section SECREF11. <<</Word-length bias>>> <<<Introducing an auxiliary loss function>>> Another way to inject segmentation awareness inside our training procedure is to control the number of target words that will be produced during post-processing. The intuition here is that notwithstanding typological discrepancies, the target segmentation should yield a number of target words that is close to the length of the source. To this end, we complement the main loss function with an additional term $\mathcal {L}_\mathrm {AUX}$ defined as: The rationale behind this additional term is as follows: recall that a boundary is then inserted on the target side whenever two consecutive units are not aligned to the same source word. The dot product between consecutive lines in the attention matrix will be close to 1 if consecutive target units are aligned to the same source word, and closer to 0 if they are not. The summation thus quantifies the number of target units that will not be followed by a word boundary after segmentation, and $I - \sum _{i=1}^{I-1} \alpha _{i,*}^\top \alpha _{i+1, *}$ measures the number of word boundaries that are produced on the target side. Minimizing this auxiliary term should guide the model towards learning attention matrices resulting in target segmentations that have the same number of words on the source and target sides. Figure FIGREF25 illustrates the effect of our auxiliary loss on an example. Without auxiliary loss, the segmentation will yield, in this case, 8 target segments (Figure FIGREF25), while the attention learnt with auxiliary loss will yield 5 target segments (Figure FIGREF25); source sentence, on the other hand, has 4 tokens. <<</Introducing an auxiliary loss function>>> <<</Towards joint alignment and segmentation>>> <<</Attention-based word segmentation>>> <<<Experiments and discussion>>> In this section, we describe implementation details for our baseline segmentation system and for the extensions proposed in Section SECREF17, before presenting data and results. <<<Implementation details>>> Our baseline system is our own reimplementation of Bahdanau's encoder-decoder with attention in PyTorch BIBREF31. The last version of our code, which handles mini-batches efficiently, heavily borrows from Joost Basting's code. Source sentences include an end-of-sentence (EOS) symbol (corresponding to $w_J$ in our notation) and target sentences include both a beginning-of-sentence (BOS) and an EOS symbol. Padding of source and target sentences in mini-batches is required, as well as masking in the attention matrices and during loss computation. Our architecture follows BIBREF20 very closely with some minor changes. We use a single-layer bidirectional RNN BIBREF32 with GRU cells: these have been shown to perform similarly to LSTM-based RNNs BIBREF33, while computationally more efficient. 
We use 64-dimensional hidden states for the forward and backward RNNs, and for the embeddings, similarly to BIBREF12, BIBREF13. In Equation (DISPLAY_FORM4), $h_j$ corresponds to the concatenation of the forward and backward states for each step $j$ of the source sequence. The alignment MLP model computes function $a$ from Equation (DISPLAY_FORM10) as $a(s_{i-1}, h_j)=v_a^\top \tanh (W_a s_{i-1} + U_a h_j)$ – see Appendix A.1.2 in BIBREF20 – where $v_a$, $W_a$, and $U_a$ are weight matrices. For the computation of weights $\tilde{\alpha _{ij}}$ in the word-length bias extension (Equation (DISPLAY_FORM21)), we arbitrarily attribute a length of 1 to the EOS symbol on the source side. The decoder is initialized using the last backward state of the encoder and a non-linear function ($\tanh $) for state $s_0$. We use a single-layer GRU RNN; hidden states and output embeddings are 64-dimensional. In preliminary experiments, and as in BIBREF34, we observed better segmentations adopting a “generate first” approach during decoding, where we first generate the current target word, then update the current RNN state. Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) are accordingly modified into: During training and forced decoding, the hidden state $s_i$ is thus updated using ground-truth embeddings $e(\Omega _{i})$. $\Omega _0$ is the BOS symbol. Our implementation of the output layer ($g$) consists of a MLP and a softmax. We train for 800 epochs on the whole corpus with Adam (the learning rate is 0.001). Parameters are updated after each mini-batch of 64 sentence pairs. A dropout layer BIBREF35 is applied to both source and target embedding layers, with a rate of 0.5. The weights in all linear layers are initialized with Glorot's normalized method (Equation (16) in BIBREF36) and bias vectors are initialized to 0. Embeddings are initialized with the normal distribution $\mathcal {N}(0, 0.1)$. Except for the bridge between the encoder and the decoder, the initialization of RNN weights is kept to PyTorch defaults. During training, we minimize the NLL loss $\mathcal {L}_\mathrm {NLL}$ (see Section SECREF3), adding optionally the auxiliary loss $\mathcal {L}_\mathrm {AUX}$ (Section SECREF22). When the auxiliary loss term is used, we schedule it to be integrated progressively so as to avoid degenerate solutions with coefficient $\lambda _\mathrm {AUX}(k)$ at epoch $k$ defined by: where $K$ is the total number of epochs and $W$ a wait parameter. The complete loss at epoch $k$ is thus $\mathcal {L}_\mathrm {NLL} + \lambda _\mathrm {AUX} \cdot \mathcal {L}_\mathrm {AUX}$. After trying values ranging from 100 to 700, we set $W$ to 200. We approximate the absolute value in Equation (DISPLAY_FORM24) by $|x| \triangleq \sqrt{x^2 + 0.001}$, in order to make the auxiliary loss function differentiable. <<</Implementation details>>> <<<Data and evaluation>>> Our experiments are performed on an actual endangered language, Mboshi (Bantu C25), a language spoken in Congo-Brazzaville, using the bilingual French-Mboshi 5K corpus of BIBREF17. On the Mboshi side, we consider alphabetic representation with no tonal information. On the French side,we simply consider the default segmentation into words. We denote the baseline segmentation system as base, the word-length bias extension as bias, and the auxiliary loss extensions as aux. 
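For concreteness, a minimal PyTorch-style sketch of the auxiliary term and its scheduling is given below. Since Equation (DISPLAY_FORM24) and the exact $\lambda _\mathrm {AUX}(k)$ formula are not reproduced in this text, the loss form (inferred from the length-ratio generalization described just below, with a ratio of 1 for plain aux) and the linear ramp after the wait epoch $W$ are stated assumptions.

```python
import torch

def aux_loss(A, source_len, ratio=1.0, smooth=0.001):
    """Auxiliary segmentation loss for one sentence pair.
    A: attention matrix of shape (I, J) (target units x source words), padding removed.
    Pushes the soft count of target word boundaries towards ratio * J."""
    n_target = A.size(0)                                  # I
    same_word = (A[:-1] * A[1:]).sum()                    # sum_i <alpha_i, alpha_{i+1}>
    n_boundaries = n_target - same_word                   # soft number of target word boundaries
    x = n_boundaries - ratio * source_len
    return torch.sqrt(x ** 2 + smooth)                    # smooth |x|, as in the implementation details

def aux_weight(epoch, wait=200, total=800):
    """Assumed schedule for lambda_AUX(k): zero before the wait epoch W, then a linear ramp up to 1."""
    return 0.0 if epoch < wait else min(1.0, (epoch - wait) / float(total - wait))

# total_loss = nll_loss + aux_weight(epoch) * aux_loss(attention_matrix, source_len=J)
```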
We also report results for a variant of aux (aux+ratio), in which the auxiliary loss is computed with a factor corresponding to the true length ratio $r_\mathrm {MB/FR}$ between Mboshi and French averaged over the first 100 sentences of the corpus. In this variant, the auxiliary loss is computed as $\vert I - r_\mathrm {MB/FR} \cdot J - \sum _{i=1}^{I-1} \alpha _{i,*}^\top \alpha _{i+1, *} \vert $. We report segmentation performance using precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF). We also report the exact-match (X) metric which computes the proportion of correctly segmented utterances. Our main results are in Figure FIGREF47, where we report averaged scores over 10 runs. As a comparison with another bilingual method inspired by the “align to segment” approach, we also include the results obtained using the statistical models of BIBREF9, denoted Pisa, in Table TABREF46. <<</Data and evaluation>>> <<<Discussion>>> A first observation is that our baseline method base improves vastly over Pisa's results (by a margin of about 30% on boundary F-measure, BF). <<<Effects of the word-length bias>>> The integration of a word-bias in the attention mechanism seems detrimental to segmentation performance, and results obtained with bias are lower than those obtained with base, except for the sentence exact-match metric (X). To assess whether the introduction of word-length bias actually encourages target units to “attend more” to longer source word in bias, we compute the correlation between the length of source word and the quantity of attention these words receive (for each source position, we sum attention column-wise: $\sum _i \tilde{\alpha }_{ij}$). Results for all segmentation methods are in Table TABREF50. bias increases the correlation between word lengths and attention, but this correlation being already high for all methods (base, or aux and aux+ratio), our attempt to increase it proves here detrimental to segmentation. <<</Effects of the word-length bias>>> <<<Effects of the auxiliary loss>>> For boundary F-measures (BF) in Figure FIGREF47, aux performs similarly to base, but with a much higher precision, and degraded recall, indicating that the new method does not oversegment as much as base. More insight can be gained from various statistics on the automatically segmented data presented in Table TABREF52. The average token and sentence lengths for aux are closer to their ground-truth values (resp. 4.19 characters and 5.96 words). The global number of tokens produced is also brought closer to its reference. On token metrics, a similar effect is observed, but the trade-off between a lower recall and an increased precision is more favorable and yields more than 3 points in F-measure. These results are encouraging for documentation purposes, where precision is arguably a more valuable metric than recall in a semi-supervised segmentation scenario. They, however, rely on a crude heuristic that the source and target sides (here French and Mboshi) should have the same number of units, which are only valid for typologically related languages and not very accurate for our dataset. As Mboshi is more agglutinative than French (5.96 words per sentence on average in the Mboshi 5K, vs. 8.22 for French), we also consider the lightly supervised setting where the true length ratio is provided. This again turns out to be detrimental to performance, except for the boundary precision (BP) and the sentence exact-match (X). 
Note also that precision becomes stronger than recall for both boundary and token metrics, indicating under-segmentation. This is confirmed by an average token length that exceeds the ground-truth (and an average sentence length below the true value, see Table TABREF52). Here again, our control of the target length proves effective: compared to base, the auxiliary loss has the effect of decreasing the average sentence length and moving it closer to its observed value (5.96), yielding an increased precision, an effect that is amplified with aux+ratio. By tuning this ratio, it is expected that we could even get slightly better results. <<</Effects of the auxiliary loss>>> <<</Discussion>>> <<</Experiments and discussion>>> <<<Related work>>> The attention mechanism introduced by BIBREF20 has been further explored by many researchers. BIBREF37, for instance, compare a global to a local approach for attention, and examine several architectures to compute alignment weights $\alpha _{ij}$. BIBREF38 additionally propose a recurrent version of the attention mechanism, where a “dynamic memory” keeps track of the attention received by each source word, and demonstrate better translation results. A more general formulation of the attention mechanism can, lastly, be found in BIBREF39, where structural dependencies between source units can be modeled. With the goal of improving alignment quality, BIBREF40 computes a distance between attentions and word alignments learnt with the reparameterization of IBM Model 2 from BIBREF41; this distance is then added to the cost function during training. Also aiming to improve alignments, BIBREF14 introduce several refinements to the attention mechanism, in the form of structural biases common in word-based alignment models. In this work, the attention model is enriched with features able to control positional bias, fertility, or symmetry in the alignments, which leads to better translations for some language pairs, under low-resource conditions. More work seeking to improve alignment and translation quality can be found in BIBREF42, BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47. Another important line of research related to ours studies the relationship between segmentation and alignment quality: it is recognized that sub-lexical units such as BPE BIBREF48 help solve the unknown word problem; other notable works along these lines include BIBREF49 and BIBREF50. CLD has also attracted a growing interest in recent years. Most recent work includes speech-to-text translation BIBREF51, BIBREF52, speech transcription using bilingual supervision BIBREF53, both speech transcription and translation BIBREF54, or automatic phonemic transcription of tonal languages BIBREF55. <<</Related work>>> <<<Conclusion>>> In this paper, we explored neural segmentation methods extending the “align to segment" approach, and proposed extensions to move towards joint segmentation and alignment. This involved the introduction of a word-length bias in the attention mechanism and the design of an auxiliary loss. The latter approach yielded improvements over the baseline on all accounts, in particular for the precision metric. Our results, however, lag behind the best monolingual performance for this dataset (see e.g. BIBREF56). This might be due to the difficulty of computing valid alignments between phonemes and words in very limited data conditions, a task which remains very challenging, as also demonstrated by the results of Pisa.
However, unlike monolingual methods, bilingual methods generate word alignments, and their real benefit should be assessed with alignment-based metrics. This is left for future work, as reference word alignments are not yet available for our data. Other extensions of this work will focus on ways to mitigate data sparsity with weak supervision information, either by using lists of frequent words or the presence of certain word boundaries on the target side, or by using more sophisticated attention models in the spirit of BIBREF14 or BIBREF39. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRecurrent architectures in NMT\nRNN encoder-decoder\nThe attention mechanism\nAttention-based word segmentation\nAlign to segment\nTowards joint alignment and segmentation\nWord-length bias\nIntroducing an auxiliary loss function\nExperiments and discussion\nImplementation details\nData and evaluation\nDiscussion\nEffects of the word-length bias\nEffects of the auxiliary loss\nRelated work\nConclusion" ], "type": "outline" }
1911.08673
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Global Greedy Dependency Parsing <<<Abstract>>> Most syntactic dependency parsing models may fall into one of two categories: transition- and graph-based models. The former models enjoy high inference efficiency with linear time complexity, but they rely on the stacking or re-ranking of partially-built parse trees to build a complete parse tree and are stuck with slower training for the necessity of dynamic oracle training. The latter, graph-based models, may boast better performance but are unfortunately marred by polynomial time inference. In this paper, we propose a novel parsing order objective, resulting in a novel dependency parsing model capable of both global (in sentence scope) feature extraction as in graph models and linear time inference as in transitional models. The proposed global greedy parser only uses two arc-building actions, left and right arcs, for projective parsing. When equipped with two extra non-projective arc-building actions, the proposed parser may also smoothly support non-projective parsing. Using multiple benchmark treebanks, including the Penn Treebank (PTB), the CoNLL-X treebanks, and the Universal Dependency Treebanks, we evaluate our parser and demonstrate that the proposed novel parser achieves good performance with faster training and decoding. <<</Abstract>>> <<<Introduction>>> Dependency parsing predicts the existence and type of linguistic dependency relations between words (as shown in Figure FIGREF1), which is a critical step in accomplishing deep natural language processing. Dependency parsing has been well developed BIBREF0, BIBREF1, and it generally relies on two types of parsing models: transition-based models and graph-based models. The former BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF4 traditionally apply local and greedy transition-based algorithms, while the latter BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 apply globally optimized graph-based algorithms. A transition-based dependency parser processes the sentence word-by-word, commonly from left to right, and forms a dependency tree incrementally from the operations predicted. This method is advantageous in that inference on the projective dependency tree is linear in time complexity with respect to sentence length; however, it has several obvious disadvantages. Because the decision-making of each step is based on partially-built parse trees, special training methods are required, which results in slow training and error propagation, as well as weak long-distance dependence processing BIBREF13. Graph-based parsers learn scoring functions in one-shot and then perform an exhaustive search over the entire tree space for the highest-scoring tree. This improves the performances of the parsers, particularly the long-distance dependency processing, but these models usually have slow inference speed to encourage higher accuracy. The easy-first parsing approach BIBREF14, BIBREF15 was designed to integrate the advantages of graph-based parsers’ better-performing trees and transition-based parsers’ linear decoding complexity. 
By processing the input tokens in a stepwise easy-to-hard order, the algorithm makes use of structured information on partially-built parse trees. Because of the presence of rich, structured information, exhaustive inference is not an optimal solution - we can leverage this information to conduct inference much more quickly. As an alternative to exhaustive inference, easy-first chooses to use an approximated greedy search that only explores a tiny fraction of the search space. Compared to graph-based parsers, however, easy-first parsers have two apparent weaknesses: slower training and worse performance. According to our preliminary studies, with the current state-of-the-art systems, we must either sacrifice training complexity for decoding speed, or sacrifice decoding speed for higher accuracy. In this paper, we propose a novel Global (featuring) Greedy (inference) parsing architecture that achieves fast training, high decoding speed and good performance. With our approach, we use the one-shot arc scoring scheme as in the graph-based parser instead of the stepwise local scoring in transition-based. This is essential for achieving competitive performance, efficient training, and fast decoding. Since, to preserve linear time decoding, we chose a greedy algorithm, we introduce a parsing order scoring scheme to retain the decoding order in inference to achieve the highest accuracy possible. Just as with one-shot scoring in graph-based parsers, our proposed parser will perform arc-attachment scoring, parsing order scoring, and decoding simultaneously in an incremental, deterministic fashion just as transition-based parsers do. We evaluated our models on the common benchmark treebanks PTB and CTB, as well as on the multilingual CoNLL and the Universal Dependency treebanks. From the evaluation results on the benchmark treebanks, our proposed model gives significant improvements when compared to the baseline parser. In summary, our contributions are thus: $\bullet $ We integrate the arc scoring mechanism of graph-based parsers and the linear time complexity inference approach of transition parsing models, which, by replacing stepwise local feature scoring, significantly alleviates the drawbacks of these models, improving their moderate performance caused by error propagation and increasing their training speeds resulting from their lack of parallelism. $\bullet $ Empirical evaluations on benchmark and multilingual treebanks show that our method achieves state-of-the-art or comparable performance, indicating that our novel neural network architecture for dependency parsing is simple, effective, and efficient. $\bullet $ Our work shows that using neural networks’ excellent learning ability, we can simultaneously achieve both improved accuracy and speed. <<</Introduction>>> <<<The General Greedy Parsing>>> The global greedy parser will build its dependency trees in a stepwise manner without backtracking, which takes a general greedy decoding algorithm as in easy-first parsers. Using easy-first parsing's notation, we describe the decoding in our global greedy parsing. As both easy-first and global greedy parsing rely on a series of deterministic parsing actions in a general parsing order (unlike the fixed left-to-right order of standard transitional parsers), they need a specific data structure which consists of a list of unattached nodes (including their partial structures) referred to as “pending". 
At each step, the parser chooses a specific action $\hat{a}$ on position $i$ with the given arc score score($\cdot $), which is generated by an arc scorer in the parser. Given an intermediate state of parsing with pending $P=\lbrace p_0, p_1, p_2, \cdots , p_N\rbrace $, the attachment action is determined as follows: where $\mathcal {A}$ denotes the set of the allowed actions, and $i$ is the index of the node in pending. In addition to distinguishing the correct attachments from the incorrect ones, the arc scorer also assigns the highest scores to the easiest attachment decisions and lower scores to the harder decisions, thus determining the parsing order of an input sentence. For projective parsing, there are exactly two types of actions in the allowed action set: ATTACHLEFT($i$) and ATTACHRIGHT($i$). Let $p_i$ refer to $i$-th element in pending, then the allowed actions can be formally defined as follows: $\bullet $ ATTACHLEFT($i$): attaches $p_{i+1}$ to $p_i$ , which results in an arc ($p_i$, $p_{i+1}$) headed by $p_i$, and removes $p_{i+1}$ from pending. $\bullet $ ATTACHRIGHT($i$): attaches $p_i$ to $p_{i+1}$ , which results in an arc ($p_{i+1}$, $p_i$) headed by $p_{i+1}$, and removes $p_i$ from pending. <<</The General Greedy Parsing>>> <<<Global Greedy Parsing Model>>> Our proposed global greedy model contains three components: (1) an encoder that processes the input sentence and maps it into hidden states that lie in a low dimensional vector space $h_i$ and feeds it into a specific representation layer to strip away irrelevant information, (2) a modified scorer with a parsing order objective, and (3) a greedy inference module that generates the dependency tree. <<<Encoder>>> We employ a bi-directional LSTM-CNN architecture (BiLSTM-CNN) to encode the context in which convolutional neural networks (CNNs) learn character-level information $e_{char}$ to better handle out-of-vocabulary words. We then combine these words' character level embeddings with their word embedding $e_{word}$ and POS embedding $e_{pos}$ to create a context-independent representation, which we then feed into the BiLSTM to create word-level context-dependent representations. To further enhance the word-level representation, we leverage an external fixed representation $e_{lm}$ from pre-trained ELMo BIBREF16 or BERT BIBREF17 layer features. Finally, the encoder outputs a sequence of contextualized representations $h_i$. Because the contextualized representations will be used for several different purposes in the following scorers, it is necessary to specify a representation for each purpose. As shown in BIBREF18, applying a multi-layer perceptron (MLP) to the recurrent output states before the classifier strips away irrelevant information for the current decision, reducing both the dimensionality and the risk of model overfitting. Therefore, in order to distinguish the biaffine scorer's head and dependent representations and the parsing order scorer's representations, we add a separate contextualized representation layer with ReLU as its activation function for each syntax head $h^{head}_i \in H_{head}$ specific representations, dependent $h^{dep}_i \in H_{dep}$ specific representations, and parsing order $h^{order}_i \in H_{order}$: <<</Encoder>>> <<<Scorers>>> The traditional easy-first model relies on an incremental tree scoring process with stepwise loss backpropagation and sub-tree removal facilitated by local scoring, relying on the scorer and loss backpropagation to hopefully obtain the parsing order. 
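As an aside, before discussing the scorers further, the greedy decoding loop that they feed can be sketched with the two projective attach actions defined above. The score function below stands in for the model's one-shot arc (plus parsing order) scores and is assumed to be given, so this is an illustration rather than the authors' implementation:

    def greedy_parse(pending, score):
        # pending: list of node ids, e.g. [ROOT, w1, ..., wn].
        # score(action, i, pending) -> float is assumed given (one-shot model scores).
        arcs = []  # (head, dependent) pairs
        while len(pending) > 1:
            action, i = max(
                ((a, j) for a in ("ATTACH_LEFT", "ATTACH_RIGHT")
                 for j in range(len(pending) - 1)),
                key=lambda ai: score(ai[0], ai[1], pending),
            )
            if action == "ATTACH_LEFT":      # p_i heads p_{i+1}
                arcs.append((pending[i], pending[i + 1]))
                del pending[i + 1]
            else:                            # ATTACH_RIGHT: p_{i+1} heads p_i
                arcs.append((pending[i + 1], pending[i]))
                del pending[i]
        return arcs

Each iteration removes exactly one node from pending, so a sentence of length $n$ is parsed in $n$ deterministic steps without backtracking.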
Communicating the information from the scorer and the loss requires training a dynamic oracle, which exposes the model to the configurations resulting from erroneous decisions. This training process is done at the token level, not the sentence level, which unfortunately means incremental scoring prevents parallelized training and causes error propagation. We thus forego incremental local scoring, and, inspired by the design of graph-based parsing models, we instead choose to score all of the syntactic arc candidates in one shot, which allows for global featuring at the sentence level; however, the introduction of one-shot scoring brings new problems. Since the graph-based method relies on a tree space search algorithm to find the tree with the highest score, the parsing order is not important at all. If we apply one-shot scoring to greedy parsing, we need a mechanism like a stack (as is used in transition-based parsing) to preserve the parsing order. Both transition-based and easy-first parsers build parse trees in an incremental style, which forces tree formation to follow an order starting from the root and working towards the leaf nodes, or vice versa. When a parser builds an arc that skips a layer, errors arise because some nodes can no longer find their correct parent. We thus implement a parsing order prediction module to learn a parsing order objective that outputs a parsing order score in addition to the arc score, to ensure that each pending node is attached to its parent only after all (or at least as many as possible) of its children have been collected. Our scorer consists of two parts: a biaffine scorer for one-shot scoring and a parsing order scorer for parsing order guidance. For the biaffine scorer, we adopt the biaffine attention mechanism BIBREF18 to score all possible head-dependent pairs: where $\textbf {W}_{arc}$, $\textbf {U}_{arc}$, $\textbf {V}_{arc}$, $\textbf {b}_{arc}$ are the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector, respectively. If we perform greedy inference on $s_{arc}$ directly, as in Figure FIGREF6, the decoder tests every pair in the pending list at step $i$. Although the current score fits the correct tree structure for this example, backtracking is not allowed in the deterministic greedy inference, so according to the maximum score $s_{arc}$, the edge selected in step $i$+1 is “root"$\rightarrow $“come". This prevents the child nodes (“today" and “.") from finding the correct parent node in the subsequent step. Thus, the decoder is stuck with this error. This problem can be solved or mitigated by using a maximum spanning tree (MST) decoder or by adding a beam search method to the inference, but neither guarantees maintaining linear time decoding. Therefore, we propose a new scorer for the parsing order, $s_{order}$. In the scoring stage, the parsing order score is passed to the decoder to guide it and prevent (as much as possible) resorting to erroneous choices. We formally define the parsing order score for decoding. To decode the nodes at the bottom of the syntax tree first, we define the parsing order priority as the layer “level" or “position" in the tree. The biaffine output score is the probability of edge (dependency) existence, between 0 and 1, so the greater the probability, the more likely an edge is to exist. Thus, our parsing order scorer gives a layer score for a node, and then, we add this layer score to the biaffine score.
Consequently, the relative score of the same layer can be kept unchanged, and the higher the score of a node in the bottom layer, the higher its decoding priority will be. We therefore define $s_{order}$ as: where $\textbf {W}_{order}$ and $\textbf {b}_{order} $ are parameters for the parsing order scorer. Finally, the one-shot arc score is: Similarly, we use the biaffine scorer for dependency label classification. We apply MLPs to the contextualized representations before using them in the label classifier as well. As with other graph-based models, the predicted tree at training time has each word as a dependent of its highest-scoring head (although at test time we ensure that the parse is a well-formed tree via the greedy parsing algorithm). <<</Scorers>>> <<<Training Objectives>>> To parse the syntax tree $y$ for a sentence $x$ with length $l$, the easy-first model relies on an action-by-action process performed on pending. In the training stage, the loss is accumulated once per step (action), and the model is updated by gradient backpropagation according to a preset frequency. This prohibits parallelism during model training, both between and within sentences. Therefore, the traditional easy-first model was trained to maximize the following probability: where $\emph {pending}_i$ is the pending list state at step $i$. Our proposed model, in contrast, uses a training method similar to that of graph-based models, in which the arc scores are all obtained in one shot. Consequently, it does not rely on the pending list in the training phase and only uses the pending list to promote the process of linear parsing in the inference stage. Our model is trained to optimize the probability of the dependency tree $y$ when given a sentence $x$: $P_\theta (y|x)$, which can be factorized as: where $\theta $ represents learnable parameters, $l$ denotes the length of the sentence being processed, and $y^{arc}_i$, $y^{rel}_i$ denote the highest-scoring head and dependency relation for node $x_i$. Thus, our model factors the distribution according to a bottom-up tree structure. Corresponding to the multiple objectives, several parts compose the loss of our model. The overall training loss is the sum of three objectives: where the loss for arc prediction $\mathcal {L}^{arc}$ is the negative log-likelihood loss of the golden structure $y^{arc}$: the loss for relation prediction $\mathcal {L}^{rel}$ is implemented as the negative log-likelihood loss of the golden relation $y^{rel}$ with the golden structure $y^{arc}$, and the loss for parsing order prediction $\mathcal {L}^{order}$: Because the parsing order score of each layer in the tree increases by 1, we frame it as a classification problem and therefore add a multi-class classifier module as the order scorer. <<</Training Objectives>>> <<<Non-Projective Inference>>> For non-projective inference, we introduce two additional arc-building actions as follows. $\bullet $ NP-ATTACHLEFT($i$): attaches $p_{j}$ to $p_i$ where $j > i$, which builds an arc ($p_i$, $p_{j}$) headed by $p_i$, and removes $p_{j}$ from pending. $\bullet $ NP-ATTACHRIGHT($i$): attaches $p_{j}$ to $p_i$ where $j < i$, which builds an arc ($p_i$, $p_j$) headed by $p_i$, and removes $p_j$ from pending. If we use the two arc-building actions for non-projective dependency trees directly on $s_{final}$, the time complexity will become $O(n^3)$, so we need to modify this algorithm to accommodate non-projective dependency trees.
Specifically, we no longer use $s_{final}$ directly for greedy search but instead divide each decision into two steps. The first step is to use the order score $s_{order}$ to sort the pending list in descending order. The second step is then to find the edge with the largest arc score $s_{arc}$ for the node in the first position of the sorted pending list. <<</Non-Projective Inference>>> <<<Time Complexity>>> The number of decoding steps to build a parse tree for a sentence is the same as its length, $n$. Combining this with the search in the pending list (at each step, we need to find the highest-scoring pair in the pending list to attach, which has a runtime of $O(n)$), the time complexity of a full decoding is $O(n^2)$, which is equal to 1st-order non-projective graph-based parsing but more efficient than 1st-order projective parsing with $O(n^3)$ and other higher-order graph parsing models. Compared with the current state-of-the-art transition-based parser STACKPTR BIBREF23, which has the same decoding time complexity as ours, our decoding takes $n$ steps while STACKPTR takes $2n-1$ steps and needs to compute an attention vector at each step, so our model would actually be much faster than STACKPTR in decoding. For the non-projective inference in our model, the complexity is still $O(n^2)$. Since the order score and the arc score are two parts that do not affect each other, we can sort the order scores with time complexity of $O$($n$log$n$) and then iterate in this descending order. The iteration takes $O(n)$ steps and determining the arc at each step is also $O(n)$, so the overall time complexity is $O$($n$log$n$) $+$ $O(n^2)$, simplifying to $O(n^2)$. <<</Time Complexity>>> <<</Global Greedy Parsing Model>>> <<<Experiments>>> We evaluate our parsing model on the English Penn Treebank (PTB), the Chinese Penn Treebank (CTB), treebanks from two CoNLL shared tasks and the Universal Dependency (UD) Treebanks, using unlabeled attachment scores (UAS) and labeled attachment scores (LAS) as the metrics. Punctuation is ignored as in previous work BIBREF18. For English and Chinese, we use the projective inference, while for other languages, we use the non-projective one. <<<Treebanks>>> For English, we use the Stanford Dependency (SD 3.3.0) BIBREF37 conversion of the Penn Treebank BIBREF38, and follow the standard splitting convention for PTB, using sections 2-21 for training, section 22 as a development set and section 23 as a test set. We use the Stanford POS tagger BIBREF39 to generate predicted POS tags. For Chinese, we adopt the splitting convention for CTB BIBREF40 described in BIBREF19. The dependencies are converted with the Penn2Malt converter. Gold segmentation and POS tags are used as in previous work BIBREF19. For the CoNLL Treebanks, we use the English treebank from the CoNLL-2008 shared task BIBREF41 and all 13 treebanks from the CoNLL-X shared task BIBREF42. The experimental settings are the same as BIBREF43. For UD Treebanks, following the selection of BIBREF23, we take 12 treebanks from UD version 2.1 (Nivre et al. 2017): Bulgarian (bg), Catalan (ca), Czech (cs), Dutch (nl), English (en), French (fr), German (de), Italian (it), Norwegian (no), Romanian (ro), Russian (ru) and Spanish (es). We adopt the standard training/dev/test splits and use the universal POS tags provided in each treebank for all the languages.
<<</Treebanks>>> <<<Implementation Details>>> <<<Pre-trained Embeddings>>> We use the GloVe BIBREF44 trained on Wikipedia and Gigaword as external embeddings for English parsing. For other languages, we use the word vectors from 157 languages trained on Wikipedia and Crawl using fastText BIBREF45. We use the extracted BERT layer features to enhance the performance on CoNLL-X and UD treebanks. <<</Pre-trained Embeddings>>> <<<Hyperparameters>>> The character embeddings are 8-dimensional and randomly initialized. In the character CNN, the convolutions have a window size of 3 and consist of 50 filters. We use 3 stacked bidirectional LSTMs with 512-dimensional hidden states each. The outputs of the BiLSTM employ a 512-dimensional MLP layer for the arc scorer, a 128-dimensional MLP layer for the relation scorer, and a 128-dimensional MLP layer for the parsing order scorer, with all using ReLU as the activation function. Additionally, for parsing the order score, since considering it a classification problem over parse tree layers, we set its range to $[0, 1, ..., 32]$. <<</Hyperparameters>>> <<<Training>>> Parameter optimization is performed with the Adam optimizer with $\beta _1$ = $\beta _2$ = 0.9. We choose an initial learning rate of $\eta _0$ = 0.001. The learning rate $\eta $ is annealed by multiplying a fixed decay rate $\rho $ = 0.75 when parsing performance stops increasing on validation sets. To reduce the effects of an exploding gradient, we use a gradient clipping of 5.0. For the BiLSTM, we use recurrent dropout with a drop rate of 0.33 between hidden states and 0.33 between layers. Following BIBREF18, we also use embedding dropout with a rate of 0.33 on all word, character, and POS tag embeddings. <<</Training>>> <<</Implementation Details>>> <<<Main Results>>> We now compare our model with several other recently proposed parsers as shown in Table TABREF9. Our global greedy parser significantly outperforms the easy-first parser in BIBREF14 (HT-LSTM) on both PTB and CTB. Compared with other graph- and transition-based parsers, our model is also competitive with the state-of-the-art on PTB when considering the UAS metric. Compared to state-of-the-art parsers in transition and graph types, BIAF and STACKPTR, respectively, our model gives better or comparable results but with much faster training and decoding. Additionally, with the help of pre-trained language models, ELMo or BERT, our model can achieve even greater results. In order to explore the impact of the parsing order objective on the parsing performance, we replace the greedy inference with the traditional MST parsing algorithm (i.e., BIAF + parsing order objective), and the result is shown as “This work (MST)", giving slight performance improvement compared to the greedy inference, which shows globally optimized decoding of graph model still takes its advantage. Besides, compared to the standard training objective for graph model based parser, the performance improvement is slight but still shows the proposed parsing order objective is indeed helpful. <<</Main Results>>> <<<CoNLL Results>>> Table TABREF11 presents the results on 14 treebanks from the CoNLL shared tasks. Our model yields the best results on both UAS and LAS metrics of all languages except the Japanese. As for Japanese, our model gives unsatisfactory results because the original treebank was written in Roman phonetic characters instead of hiragana, which is used by both common Japanese writing and our pre-trained embeddings. 
Despite this, our model overall still gives 1.0% higher average UAS and LAS than the previous best parser, BIAF. <<</CoNLL Results>>> <<<UD Results>>> Following BIBREF23, we report results on the test sets of 12 different languages from the UD treebanks along with the current state-of-the-art: BIAF and STACKPTR. Although both the BIAF and STACKPTR parsers achieve relatively high parsing accuracies on the 12 languages and all have UAS higher than 90%, our model achieves state-of-the-art results in all languages for both UAS and LAS. Overall, our model reports more than 1.0% higher average UAS than STACKPTR and 0.3% higher than BIAF. <<</UD Results>>> <<<Runtime Analysis>>> In order to verify the time complexity analysis of our model, we measured the running time and speed of BIAF, STACKPTR and our model on the PTB training and development sets using the projective algorithm. The comparison in Table TABREF24 shows that in terms of convergence time, our model is basically the same speed as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF. STACKPTR is unexpectedly the slowest. This is because the time cost of attention scoring in decoding is not negligible when compared with the processing speed, and it actually accounts for a significant portion of the runtime. <<</Runtime Analysis>>> <<</Experiments>>> <<<Conclusion>>> This paper presents a new global greedy parser in which we enable greedy parsing inference compatible with the global arc scoring of graph-based parsing models instead of the local feature scoring of transitional parsing models. The proposed parser can perform projective parsing when using only two arc-building actions, and it also supports non-projective parsing when two extra non-projective arc-building actions are introduced. Compared to graph-based and transition-based parsers, our parser achieves a better tradeoff between parsing accuracy and efficiency by taking advantage of both graph-based models' training methods and transition-based models' linear time decoding strategies. Experimental results on 28 treebanks show the effectiveness of our parser, which achieves good performance on 27 treebanks, including the PTB and CTB benchmarks. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nThe General Greedy Parsing\nGlobal Greedy Parsing Model\nEncoder\nScorers\nTraining Objectives\nNon-Projective Inference\nTime Complexity\nExperiments\nTreebanks\nImplementation Details\nPre-trained Embeddings\nHyperparameters\nTraining\nMain Results\nCoNLL Results\nUD Results\nRuntime Analysis\nConclusion" ], "type": "outline" }
2001.08845
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Linguistic Fingerprints of Internet Censorship: the Case of SinaWeibo <<<Abstract>>> This paper studies how the linguistic components of blogposts collected from Sina Weibo, a Chinese microblogging platform, might affect the blogposts' likelihood of being censored. Our results go along with King et al. (2013)'s Collective Action Potential (CAP) theory, which states that a blogpost's potential of causing riot or assembly in real life is the key determinant of it getting censored. Although there is not a definitive measure of this construct, the linguistic features that we identify as discriminatory go along with the CAP theory. We build a classifier that significantly outperforms non-expert humans in predicting whether a blogpost will be censored. The crowdsourcing results suggest that while humans tend to see censored blogposts as more controversial and more likely to trigger action in real life than the uncensored counterparts, they in general cannot make a better guess than our model when it comes to `reading the mind' of the censors in deciding whether a blogpost should be censored. We do not claim that censorship is only determined by the linguistic features. There are many other factors contributing to censorship decisions. The focus of the present paper is on the linguistic form of blogposts. Our work suggests that it is possible to use linguistic properties of social media posts to automatically predict if they are going to be censored. <<</Abstract>>> <<<Introduction>>> In 2019, Freedom in the World, a yearly survey produced by Freedom House that measures the degree of civil liberties and political rights in every nation, recorded the 13th consecutive year of decline in global freedom. This decline spans across long-standing democracies such as USA as well as authoritarian regimes such as China and Russia. “Democracy is in retreat. The offensive against freedom of expression is being supercharged by a new and more effective form of digital authoritarianism." According to the report, China is now exporting its model of comprehensive internet censorship and surveillance around the world, offering trainings, seminars, and even study trips as well as advanced equipment. In this paper, we deal with a particular type of censorship – when a post gets removed from a social media platform semi-automatically based on its content. We are interested in exploring whether there are systematic linguistic differences between posts that get removed by censors from Sina Weibo, a Chinese microblogging platform, and the posts that remain on the website. Sina Weibo was launched in 2009 and became the most popular social media platform in China. Sina Weibo has over 431 million monthly active users. In cooperation with the ruling regime, Weibo sets strict control over the content published under its service BIBREF0. According to Zhu et al. zhu-etal:2013, Weibo uses a variety of strategies to target censorable posts, ranging from keyword list filtering to individual user monitoring. Among all posts that are eventually censored, nearly 30% of them are censored within 5–30 minutes, and nearly 90% within 24 hours BIBREF1. 
We hypothesize that the former are done automatically, while the latter are removed by human censors. Research shows that some of the censorship decisions are not necessarily driven by the criticism of the state BIBREF2, the presence of controversial topics BIBREF3, BIBREF4, or posts that describe negative events BIBREF5. Rather, censorship is triggered by other factors, such as for example, the collective action potential BIBREF2, i.e., censors target posts that stimulate collective action, such as riots and protests. The goal of this paper is to compare censored and uncensored posts that contain the same sensitive keywords and topics. Using the linguistic features extracted, a neural network model is built to explore whether censorship decision can be deduced from the linguistic characteristics of the posts. The contributions of this paper are: 1. We decipher a way to determine whether a blogpost on Weibo has been deleted by the author or censored by Weibo. 2. We develop a corpus of censored and uncensored Weibo blogposts that contain sensitive keyword(s). 3. We build a neural network classifier that predicts censorship significantly better than non-expert humans. 4. We find a set of linguistics features that contributes to the censorship prediction problem. 5. We indirectly test the construct of Collective Action Potential (CAP) proposed by King et al. king-etal:2013 through crowdsourcing experiments and find that the existence of CAP is more prevalent in censored blogposts than uncensored blogposts as judged by human annotators. <<</Introduction>>> <<<Previous Work>>> There have been significant efforts to develop strategies to detect and evade censorship. Most work, however, focuses on exploiting technological limitations with existing routing protocols BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. Research that pays more attention to linguistic properties of online censorship in the context of censorship evasion include, for example, Safaka et al. safaka-etal:2016 who apply linguistic steganography to circumvent censorship. Lee lee:2016 uses parodic satire to bypass censorship in China and claims that this stylistic device delays and often evades censorship. Hiruncharoenvate et al. hirun-etal:2015 show that the use of homophones of censored keywords on Sina Weibo could help extend the time a Weibo post could remain available online. All these methods rely on a significant amount of human effort to interpret and annotate texts to evaluate the likelihood of censorship, which might not be practical to carry out for common Internet users in real life. There has also been research that uses linguistic and content clues to detect censorship. Knockel et al. knockel-etal:2015 and Zhu et al. zhu-etal:2013 propose detection mechanisms to categorize censored content and automatically learn keywords that get censored. King et al. king-etal:2013 in turn study the relationship between political criticism and chance of censorship. They come to the conclusion that posts that have a Collective Action Potential get deleted by the censors even if they support the state. Bamman et al. bamman-etal:2012 uncover a set of politically sensitive keywords and find that the presence of some of them in a Weibo blogpost contribute to a higher chance of the post being censored. Ng et al. kei-nlp4if:2018 also target a set of topics that have been suggested to be sensitive, but unlike Bamman et al. bamman-etal:2012, they cover areas not limited to politics. Ng et al. 
kei-nlp4if:2018 investigate how the textual content as a whole might be relevant to censorship decisions when both the censored and uncensored blogposts include the same sensitive keyword(s). Our work is related to Ng et al. kei-nlp4if:2018 and Ng et al. ng-etal-2019-neural; however, we introduce a larger and more diverse dataset of censored posts; we experiment with a wider range of features and in fact show that not all the features reported in Ng et al. guarantee the best performance. We built a classifier that significantly outperforms Ng et al. We conduct a crowdsourcing experiment testing human judgments of controversy and censorship as well as indirectly testing the construct of collective action potential proposed by King et al. <<</Previous Work>>> <<<Tracking Censorship>>> Tracking censorship topics on Weibo is a challenging task due to the transient nature of censored posts and the scarcity of censored data from well-known sources such as FreeWeibo and WeiboScope. The most straightforward way to collect data from a social media platform is to make use of its API. However, Weibo imposes various restrictions on the use of its API such as restricted access to certain endpoints and restricted number of posts returned per request. Above all, the Weibo API does not provide any endpoint that allows easy and efficient collection of the target data (posts that contain sensitive keywords). Therefore, an alternative method is needed to track censorship for our purpose. <<</Tracking Censorship>>> <<<Datasets>>> <<<Using Zhu et al. (2003)'s Corpus>>> Zhu et al. zhu-etal:2013 collected over 2 million posts published by a set of around 3,500 sensitive users during a 2-month period in 2012. We extract around 20 thousand text-only posts using 64 keywords across 26 topics, which partially overlap with those included in the New Corpus (see below and in Table TABREF20). We filter all duplicates. Among the extracted posts, 930 (4.63%) are censored by Weibo as verified by Zhu et al. zhu-etal:2013 The extracted data from Zhu et al.zhu-etal:2013's are also used in building classification models. While it is possible to study the linguistic features in Zhu et al’s dataset without collecting new data, we created another corpus that targets `normal' users (Zhu et al. target `sensitive' users) and a different time period so that the results are not specific to a particular group of users and time. <<</Using Zhu et al. (2003)'s Corpus>>> <<<New Corpus>>> <<<Web Scraping>>> We develop a web scraper that continuously collects and tracks data that contain sensitive keywords on the front-end. The scraper's target interface displays 20 to 24 posts that contain a certain search key term(s), resembling a search engine's result page. We call this interface the Topic Timeline since the posts all contain the same keyword(s) and are displayed in reverse chronological order. The Weibo API does not provide any endpoint that returns the same set of data appeared on the Topic Timeline. Through a series of trial-and-errors to avoid CAPTCHAs that interrupt the data collection process, we found an optimal scraping frequency of querying the Topic Timeline every 5 to 10 minutes using 17 search terms (see Appendix) across 10 topics (see Table TABREF13) for a period of 4 months (August 29, 2018 to December 29, 2018). In each query, all relevant posts and their meta-data are saved to our database. We save posts that contain texts only (i.e. posts that do not contain images, hyperlinks, re-blogged content etc.) 
and filter out duplicates. <<</Web Scraping>>> <<<Decoding Censorship>>> According to Zhu et al. zhu-etal:2013, the unique ID of a Weibo post is the key to distinguish whether a post has been censored by Weibo or has been instead removed by the authors themselves. If a post has been censored by Weibo, querying its unique ID through the API returns an error message of “permission denied" (system-deleted), whereas a user-removed post returns an error message of “the post does not exist" (user-deleted). However, since the Topic Timeline (the data source of our web scraper) can be accessed only on the front-end (i.e. there is no API endpoint associated with it), we rely on both the front-end and the API to identify system- and user-deleted posts. It is not possible to distinguish the two types of deletion by directly querying the unique ID of all scraped posts because, through empirical experimentation, uncensored posts and censored (system-deleted) posts both return the same error message – “permission denied"). Therefore, we need to first check if a post still exists on the front-end, and then send an API request using the unique ID of the post that no longer exists to determine whether it has been deleted by the system or the user. The steps to identify censorship status of each post are illustrated in Figure FIGREF12. First, we check whether a scraped post is still available through visiting the user interface of each post. This is carried out automatically in a headless browser 2 days after a post is published. If a post has been removed (either by system or by user), the headless browser is redirected to an interface that says “the page doesn't exist"; otherwise, the browser brings us to the original interface that displays the post content. Next, after 14 days, we use the same methods in step 1 to check the posts' status again. This step allows our dataset to include posts that have been removed at a later stage. Finally, we send a follow-up API query using the unique ID of posts that no longer exist on the browser in step 1 and step 2 to determine censorship status using the same decoding techniques proposed by Zhu et al. as described above zhu-etal:2013. Altogether, around 41 thousand posts are collected, in which 952 posts (2.28%) are censored by Weibo. In our ongoing work, we are comparing the accuracy of the classifier on posts that are automatically removed vs. those removed by humans. The results will be reported in the future publications. We would like to emphasize that while the data collection methods could be seen as recreating a keyword search, the scraping pipeline also deciphers the logic in discovering censorship on Weibo. <<</Decoding Censorship>>> <<</New Corpus>>> <<<Sample Data>>> Figure FIGREF15 shows several examples selected randomly from our dataset. Each pair of censored and uncensored posts contains the same sensitive keyword. <<</Sample Data>>> <<</Datasets>>> <<<Crowdsourcing Experiment>>> A balanced corpus is created. The uncensored posts of each dataset are randomly sampled to match with the number of their censored counterparts (see Table TABREF13 and Table TABREF20). We select randomly a subset of the data collected by the web scraper to construct surveys for crowdsourcing experiment. The surveys ask participants three questions (see Figure FIGREF16). Sample questions are included in Appendix. 
Question 1 explores how humans perform on the task of censorship classification; question 2 explores whether a blogpost is controversial; question 3 serves as a way to explore in our data the concept of Collective Action Potential (CAP) suggested by King et al. king-etal:2013. According to King et al. king-etal:2013, Collective Action Potential is the potential to cause collective action such as protest or organized crowd formation outside the Internet. Participants can respond either Yes or No to the 3 questions above. A total of 800 blogposts (400 censored and 400 uncensored) are presented to 37 different participants through the crowdsourcing platform Witmart in 8 batches (100 blogposts per batch). Each blogpost is annotated by 6 to 12 participants. The purpose of this paper is to shed light on the “knowledge gap” between censors and normal Weibo users about censorable content. We believe Weibo users are aware of potentially censorable content but are not “trained” enough to avoid or identify it. The results are summarized in Table TABREF19. The annotation results are intuitive – participants tend to see censored blogposts as more controversial and more likely to trigger action in real life than the uncensored counterparts. We obtain a Fleiss' kappa score for each question to study the inter-rater agreement. Since the number and identity of participants of each batch of the survey are different, we obtain an average Fleiss' kappa from the results of each batch. The Fleiss' kappa values for questions 1 to 3 are 0.13, 0.07, and 0.03 respectively, which all fall under the category of slight agreement. We hypothesize that since all blogposts contain sensitive keyword(s), the annotators choose to label a fair amount of uncensored blogposts as controversial, and even as likely to be censored or cause action in real life. This might also be the reason for the low agreement scores – the sensitive keywords might be the cause of divided opinions. Regarding the result of censorship prediction, 23.83% of censored blogposts are correctly annotated as censored, while 83.59% of uncensored blogposts are correctly annotated as uncensored. This result suggests that participants tend to predict that a blogpost will survive censorship on Weibo, despite the fact that they can see the presence of controversial element(s) in a blogpost, as suggested by the annotation results of question 2. This suggests that detecting censorable content is a non-trivial task and that humans do not have a good intuition (unless specifically trained, perhaps) about what material is going to be censored. It might be true that there is some level of subjectivity from human censors. We believe there are commonalities among censored blogposts that pass through the “subjectivity filters” and such commonalities could be the linguistic features that contribute to our experiment results (see sections SECREF6 and SECREF7). <<</Crowdsourcing Experiment>>> <<<Feature Extraction>>> To build an automatic classifier, we first extract features from both our scraped data and Zhu et al.'s dataset. While the datasets we use are different from those of Ng et al. kei-nlp4if:2018 and Ng et al. ng-etal-2019-neural, some of the features we extract are similar to theirs. We include CRIE features (see below) and the number of followers feature that are not extracted in Ng et al. kei-nlp4if:2018's work.
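Before the individual feature sets are described, a brief aside on the crowdsourcing analysis above: the batch-averaged Fleiss' kappa reported there can be computed along the following lines. This is only an illustrative sketch using statsmodels, which is not necessarily the toolkit the authors used, and the input format is an assumption:

    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    def average_fleiss_kappa(batches):
        # batches: one array per survey batch, shaped (n_posts, n_annotators),
        # holding the Yes/No answers (coded as 1/0) to a single question.
        # Kappa is computed per batch and then averaged, mirroring the text above.
        kappas = []
        for answers in batches:
            counts, _ = aggregate_raters(np.asarray(answers))  # posts x categories
            kappas.append(fleiss_kappa(counts))
        return float(np.mean(kappas))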
<<<Linguistic Features>>> We extract 5 sets of linguistic features from both datasets (see below) – the LIWC features, the CRIE features, the sentiment features, the semantic features, and word embeddings. We are interested in the LIWC and CRIE features because they are purely linguistic, which aligns with the objective of our study. Also, some of the LIWC features extracted from Ng et al. ng2018detecting's data have shown to be useful in classifying censored and uncensored tweets. <<<LIWC features>>> The English Linguistic Inquiry and Word Count (LIWC) BIBREF11, BIBREF12 is a program that analyzes text on a word-by-word basis, calculating percentage of words that match each language dimension, e.g., pronouns, function words, social processes, cognitive processes, drives, informal language use etc. Its lexicon consists of approximately 6400 words divided into categories belong to different linguistic dimensions and psychological processes. LIWC builds on previous research establishing strong links between linguistic patterns and personality/psychological state. We use a version of LIWC developed for Chinese by Huang et al. huang-etal:2012 to extract the frequency of word categories. Altogether we extract 95 features from LIWC. One important feature of the LIWC lexicon is that categories form a tree structure hierarchy. Some features subsume others. <<</LIWC features>>> <<<Sentiment features>>> We use BaiduAI to obtain a set of sentiment scores for each post. BaiduAI's sentiment analyzer is built using deep learning techniques based on data found on Baidu, one of the most popular search engines and encyclopedias in mainland China. It outputs a positive sentiment score and a negative sentiment score which sum to 1. <<</Sentiment features>>> <<<CRIE features>>> We use the Chinese Readability Index Explorer (CRIE) BIBREF13, a text analysis tool developed for measuring the readability of a Chinese text based on the its linguistic components. Its internal dictionaries and lexical information are developed based on dominant corpora such as the Sinica Tree Bank. CRIE outputs 50 linguistic features (see Appendix), such as word, syntax, semantics, and cohesion in each text or produce an aggregated result for a batch of texts. CRIE can train and categorize texts based on their readability levels. We use the textual-features analysis for our data and derive readability scores for each post in our data. These scores are mainly based on descriptive statistics. <<</CRIE features>>> <<<Semantic features>>> We use the Chinese Thesaurus developed by Mei mei:1984 and extended by HIT-SCIR to extract semantic features. The structure of this semantic dictionary is similar to WordNet, where words are divided into 12 semantic classes and each word can belong to one or more classes. It can be roughly compared to the concept of word senses. We derive a semantic ambiguity feature by dividing the number of words in each post by the number of semantic classes in it. <<</Semantic features>>> <<<Frequency & readability>>> We compute the average frequency of characters and words in each post using Da da:2004's work and Aihanyu's CNCorpus respectively. For words with a frequency lower than 50 in the reference corpus, we count it as 0.0001%. It is intuitive to think that a text with less semantic variety and more common words and characters is relatively easier to read and understand. 
We derive a Readability feature by taking the mean of the character frequency, the word frequency, and the word-count-to-semantic-classes ratio described above. It is assumed that the lower the mean of the 3 components, the less readable a text is. In fact, these 3 components are part of Sung et al. sung-et-al:2015's readability metric for native speakers on the word level and semantic level. <<</Frequency & readability>>> <<<Word embeddings>>> Word vectors are trained using the word2vec tool BIBREF14, BIBREF15 on 300,000 of the latest Chinese articles on Wikipedia. A 200-dimensional vector is computed for each word of each blogpost. The vector average of each blogpost is the sum of word vectors divided by the number of vectors. The 200-dimensional vector average is used as features for classification. <<</Word embeddings>>> <<</Linguistic Features>>> <<<Non-linguistic Features>>> <<<Followers>>> The number of followers of the author of each post is recorded and used as a feature for classification. <<</Followers>>> <<</Non-linguistic Features>>> <<</Feature Extraction>>> <<<Classification>>> Features extracted from the balanced datasets (see Table 1 and Table 3) are used for classification. Although uncensored blogposts significantly outnumber censored ones in real life, such an unbalanced corpus might be more suitable for anomaly detection. All numeric values of the features have been standardized before classification. We use a multilayer perceptron (MLP) classifier to classify instances into censored and uncensored. A number of classification experiments using different combinations of features are carried out. Best performances are achieved using the combination of CRIE, sentiment, semantic, frequency, readability and follower features (i.e. all features but LIWC and word embeddings) (see Table TABREF36). The feature selection is performed using random sampling. As a result, 77 features are selected that perform consistently well across the datasets. We call these features the best features set (see https://msuweb.montclair.edu/~feldmana/publications/aaai20_appendix.pdf for the full list of features). We vary the number of epochs and hidden layers. The rest of the parameters are set to default – learning rate of 0.3, momentum of 0.2, batch size of 100, validation threshold of 20. Classification experiments are performed on 1) both datasets, 2) the scraped data only, and 3) Zhu et al.'s data only. Each experiment is validated with 10-fold cross validation. We report the accuracy of each model in Table TABREF36. It is worth mentioning that using the LIWC features only, the CRIE features only, the word embeddings only, all features excluding the CRIE features, or all features except the LIWC and CRIE features all result in poor performance of below 60%. Besides the MLP, we also use the same sets of features to train Naive Bayes, Logistic Regression, and Support Vector Machine classifiers. However, the performances are all below 65%. <<</Classification>>> <<<Discussion and Conclusion>>> Our best results are over 30% higher than the baseline and about 60% higher than the human baseline obtained through crowdsourcing, which shows that our classifier has a greater censorship predictive ability compared to human judgments. The classification on both datasets together tends to give higher accuracy using at least 3 hidden layers. However, the performance does not improve when adding additional layers (other parameters being the same).
Since the two datasets were collected differently and contain different topics, combining them together results in a richer dataset that requires more hidden layers to train a better model. It is worth noting that classifying both datasets using the best features set decreases the accuracy, while using all features but LIWC improves the classification performance. The reason for this behavior could be an existence of consistent differences in the LIWC features between the datasets. Since the LIWC features in the best features set (see Appendix https://msuweb.montclair.edu/~feldmana/publications/aaai20_appendix.pdf) consist of mostly word categories of different genres of vocabulary (i.e. grammar and style agnostic), it might suggest that the two datasets use vocabularies differently. Yet, the high performance obtained excluding the LIWC features shows that the key to distinguishing between censored and uncensored posts seems to be the features related to writing style, readability, sentiment, and semantic complexity of a text. Figure FIGREF38 shows two blogposts annotated by CRIE with number of verbs and number of first person pronoun features. To narrow down on what might be the best features that contribute to distinguishing censored and uncensored posts, we compare the mean of each feature of the two classes (see Figure FIGREF37). The 6 features distinguish censored from uncensored are: 1. negative sentiment 2. average number of idioms in each sentence 3. number of content word categories 4. number of idioms 5. number of complex semantic categories 6. verbs On the other hand, the 4 features that distinguish uncensored from censored are: 1. positive sentiment 2. words related to leisure 3. words related to reward 4. words related to money This might suggest that the censored posts generally convey more negative sentiment and are more idiomatic and semantically complex in terms of word usage. According to King et al. king-etal:2013, Collective Action Potential, which is related to a blogpost's potential of causing riot or assembly in real-life, is the key determinant of a blogpost getting censored. Although there is not a definitive measure of this construct, it is intuitive to relate a higher average use of verbs to a post that calls for action. On the other hand, the uncensored posts might be in general more positive in nature (positive sentiment) and include more content that talks about neutral matters (money, leisure, reward). We further explore how the use of verbs might possibly affect censorship by studying the types of verbs used in censored and uncensored blogposts. We extracted verbs from all blogposts by using the Jieba Part-of-speech tagger . We then used the Chinese Thesaurus described in Section SECREF21 to categorize the verbs into 5 classes: Actions, Psychology, Human activities, States and phenomena, and Relations. However, no significant differences have been found across censored and uncensored blogposts. A further analysis on verbs in terms of their relationship with actions and arousal can be a part of future work. Since the focus of this paper is to study the linguistic content of blogposts, rather than rate of censorship, we did not employ technical methods to differentiate blogposts that have different survival rates. Future work could be done to investigate any differences between blogposts that get censored at different rates. In our ongoing work, we are comparing the accuracy of the classifier on posts that are automatically removed vs. those removed by humans. 
The results will be reported in future publications. To conclude, our work shows that there are linguistic fingerprints of censorship, and it is possible to use the linguistic properties of a social media post to automatically predict whether it is going to be censored. It will be interesting to explore whether the same linguistic features can be used to predict censorship on other social media platforms and in other languages. <<</Discussion and Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nPrevious Work\nTracking Censorship\nDatasets\nUsing Zhu et al. (2003)'s Corpus\nNew Corpus\nWeb Scraping\nDecoding Censorship\nSample Data\nCrowdsourcing Experiment\nFeature Extraction\nLinguistic Features\nLIWC features\nSentiment features\nCRIE features\nSemantic features\nFrequency & readability\nWord embeddings\nNon-linguistic Features\nFollowers\nClassification\nDiscussion and Conclusion" ], "type": "outline" }
1909.05246
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Self-Attentional Models Application in Task-Oriented Dialogue Generation Systems <<<Abstract>>> Self-attentional models are a new paradigm for sequence modelling tasks which differ from common sequence modelling methods, such as recurrence-based and convolution-based sequence learning, in the way that their architecture is only based on the attention mechanism. Self-attentional models have been used in the creation of the state-of-the-art models in many NLP tasks such as neural machine translation, but their usage has not been explored for the task of training end-to-end task-oriented dialogue generation systems yet. In this study, we apply these models on the three different datasets for training task-oriented chatbots. Our finding shows that self-attentional models can be exploited to create end-to-end task-oriented chatbots which not only achieve higher evaluation scores compared to recurrence-based models, but also do so more efficiently. <<</Abstract>>> <<<Introduction>>> Task-oriented chatbots are a type of dialogue generation system which tries to help the users accomplish specific tasks, such as booking a restaurant table or buying movie tickets, in a continuous and uninterrupted conversational interface and usually in as few steps as possible. The development of such systems falls into the Conversational AI domain which is the science of developing agents which are able to communicate with humans in a natural way BIBREF0. Digital assistants such as Apple's Siri, Google Assistant, Amazon Alexa, and Alibaba's AliMe are examples of successful chatbots developed by giant companies to engage with their customers. There are mainly two different ways to create a task-oriented chatbot which are either using set of hand-crafted and carefully-designed rules or use corpus-based method in which the chatbot can be trained with a relatively large corpus of conversational data. Given the abundance of dialogue data, the latter method seems to be a better and a more general approach for developing task-oriented chatbots. The corpus-based method also falls into two main chatbot design architectures which are pipelined and end-to-end architectures BIBREF1. End-to-end chatbots are usually neural networks based BIBREF2, BIBREF3, BIBREF4, BIBREF5 and thus can be adapted to new domains by training on relevant dialogue datasets for that specific domain. Furthermore, all sequence modelling methods can also be used in training end-to-end task-oriented chatbots. A sequence modelling method receives a sequence as input and predicts another sequence as output. For example in the case of machine translation the input could be a sequence of words in a given language and the output would be a sentence in a second language. In a dialogue system, an utterance is the input and the predicted sequence of words would be the corresponding response. Self-attentional models are a new paradigm for sequence modelling tasks which differ from common sequence modelling methods, such as recurrence-based and convolution-based sequence learning, in the way that their architecture is only based on the attention mechanism. 
The Transformer BIBREF6 and Universal Transformer BIBREF7 models are the first models that entirely rely on the self-attention mechanism for both encoder and decoder, which is why they are also referred to as self-attentional models. The Transformer model has produced state-of-the-art results in neural machine translation BIBREF6, and this encouraged us to further investigate this model for the task of training task-oriented chatbots. While the Transformer model has no recurrence, it turns out that the recurrence used in RNN models is essential for some NLP tasks, including language understanding, and thus the Transformer fails to generalize on those tasks BIBREF7. We also investigate the usage of the Universal Transformer for this task to see how it compares to the Transformer model. We focus on self-attentional sequence modelling for this study and intend to answer one specific question: How effective are self-attentional models for training end-to-end task-oriented chatbots? Our contributions in this study are as follows: We train end-to-end task-oriented chatbots using both self-attentional models and common recurrence-based models used in sequence modelling tasks, and compare and analyze the results using different evaluation metrics on three different datasets. We provide insight into how effective self-attentional models are for this task and benchmark the time performance of these models against recurrence-based sequence modelling methods. We try to quantify the effectiveness of the self-attention mechanism in self-attentional models and compare its effect to that of recurrence-based models for the task of training end-to-end task-oriented chatbots. <<</Introduction>>> <<<Related Work>>> <<<Task-Oriented Chatbots Architectures>>> End-to-end architectures are among the most used architectures for research in the field of conversational AI. The advantage of using an end-to-end architecture is that one does not need to explicitly train different components for language understanding and dialogue management and then concatenate them together. Network-based end-to-end task-oriented chatbots as in BIBREF4, BIBREF8 try to model the learning task as a policy learning problem in which the model learns to output a proper response given the current state of the dialogue. As discussed before, all encoder-decoder sequence modelling methods can be used for training end-to-end chatbots. Eric and Manning eric2017copy use the copy mechanism augmentation on simple recurrent neural sequence modelling and achieve good results in training end-to-end task-oriented chatbots BIBREF9. Another popular method for training chatbots is based on memory networks. Memory networks augment neural networks with task-specific memories which the model can learn to read and write. Memory networks have been used in BIBREF8 for training task-oriented agents, in which they store dialogue context in the memory module, and then the model uses it to select a system response (also stored in the memory module) from a set of candidates. A variation of key-value memory networks BIBREF10 has been used in BIBREF11 for training task-oriented chatbots: the knowledge base is stored as triplets ((subject, relation, object), such as (yoga, time, 3pm)) in the key-value memory network, and the model then tries to select the most relevant entity from the memory and create a relevant response.
This approach makes the interaction with the knowledge base smoother compared to other models. Another approach for training end-to-end task-oriented dialogue systems models task-oriented dialogue generation as a reinforcement learning problem in which the current state of the conversation is passed to a sequence learning network, and this network decides the action the chatbot should take. The end-to-end LSTM-based model BIBREF12 and the Hybrid Code Networks BIBREF13 can use both supervised and reinforcement learning approaches for training task-oriented chatbots. <<</Task-Oriented Chatbots Architectures>>> <<<Sequence Modelling Methods>>> Sequence modelling methods usually fall into recurrence-based, convolution-based, and self-attention-based methods. In recurrence-based sequence modeling, the words are fed into the model in a sequential way, and the model learns the dependencies between the tokens given the context from the past (and the future in the case of bidirectional Recurrent Neural Networks (RNNs)) BIBREF14. RNNs and their variations such as Long Short-term Memory (LSTM) BIBREF15 and Gated Recurrent Units (GRU) BIBREF16 are the most widely used recurrence-based models in sequence modelling tasks. Convolution-based sequence modelling methods rely on Convolutional Neural Networks (CNN) BIBREF17, which are mostly used for vision tasks but can also be used for handling sequential data. In CNN-based sequence modelling, multiple CNN layers are stacked on top of each other to give the model the ability to learn long-range dependencies. The stacking of layers in CNNs for sequence modeling allows the model to grow its receptive field, or in other words context size, and thus to model complex dependencies between different sections of the input sequence BIBREF18, BIBREF19. WaveNet van2016wavenet, used in audio synthesis, and ByteNet kalchbrenner2016neural, used in machine translation tasks, are examples of models trained using convolution-based sequence modelling. <<</Sequence Modelling Methods>>> <<</Related Work>>> <<<Models>>> We compare the most commonly used recurrence-based models for sequence modelling and contrast them with the Transformer and Universal Transformer models. The models that we train are: <<<LSTM and Bi-Directional LSTM>>> Long Short-term Memory (LSTM) networks are a special kind of RNN that can learn long-term dependencies BIBREF15. RNN models suffer from the vanishing gradient problem BIBREF20, which makes it hard for them to learn long-term dependencies. The LSTM model tackles this problem by defining a gating mechanism which introduces input, output and forget gates; the model can decide how much of the previous information it needs to keep and how much of the new information it needs to integrate, and this mechanism helps it keep track of long-term dependencies. Bi-directional LSTMs BIBREF21 are a variation of LSTMs which have proved to give better results for some NLP tasks BIBREF22. The idea behind a Bi-directional LSTM is to give the network (while training) the ability to look not only at past tokens, as an LSTM does, but also at future tokens, so the model has access to information from both the past and the future.
In the case of task-oriented dialogue generation systems, the information the model needs in order to learn the dependencies between tokens sometimes comes from tokens that are ahead of the current index, and if the model is able to take future tokens into account it can learn more efficiently. <<</LSTM and Bi-Directional LSTM>>> <<<Transformer>>> As discussed before, the Transformer is the first model that entirely relies on the self-attention mechanism for both the encoder and the decoder. The Transformer uses the self-attention mechanism to learn a representation of a sentence by relating different positions of that sentence. Like many of the sequence modelling methods, the Transformer follows the encoder-decoder architecture in which the input is given to the encoder and the output of the encoder is passed to the decoder to create the output sequence. The difference between the Transformer (which is a self-attentional model) and other sequence models (such as recurrence-based and convolution-based models) is that the encoder and decoder architecture is only based on the self-attention mechanism. The Transformer also uses multi-head attention, which is intended to give the model the ability to look at different representations of the different positions of the input (encoder self-attention), the output (decoder self-attention), and also between input and output (encoder-decoder attention) BIBREF6. It has been used in a variety of NLP tasks such as mathematical language understanding [110], language modeling BIBREF23, machine translation BIBREF6, question answering BIBREF24, and text summarization BIBREF25. <<</Transformer>>> <<<Universal Transformer>>> The Universal Transformer model is an encoder-decoder-based sequence-to-sequence model which applies recurrence to the representation of each of the positions of the input and output sequences. The main difference between the RNN recurrence and the Universal Transformer recurrence is that the recurrence used in the Universal Transformer is applied to consecutive representation vectors of each token in the sequence (i.e., over depth), whereas in RNN models this recurrence is applied over the positions of the tokens in the sequence. A variation of the Universal Transformer, called the Adaptive Universal Transformer, applies the Adaptive Computation Time (ACT) BIBREF26 technique to the Universal Transformer model, which makes the model train faster since it saves computation time and in some cases can also increase the model accuracy. The ACT allows the Universal Transformer model to use different recurrence time steps for different tokens. We know, based on reported evidence, that transformers are potent in NLP tasks like translation and question answering. Our aim is to assess the applicability and effectiveness of Transformers and Universal Transformers in the domain of task-oriented conversational agents. In the next section, we report on experiments that investigate the performance of self-attentional models against the aforementioned recurrence-based models for the task of training end-to-end task-oriented chatbots. <<</Universal Transformer>>> <<</Models>>> <<<Experiments>>> We run our experiments on a Tesla 960M Graphical Processing Unit (GPU). We evaluated the models using the aforementioned metrics and also applied early stopping (with delta set to 0.1 for 600 training steps). <<<Datasets>>> We use three different datasets for training the models.
We use the Dialogue State Tracking Competition 2 (DSTC2) dataset BIBREF27, which is the most widely used dataset for research on task-oriented chatbots. We also used two other datasets recently open-sourced by Google Research BIBREF28: M2M-sim-M (a dataset in the movie domain) and M2M-sim-R (a dataset in the restaurant domain). M2M stands for Machines Talking to Machines, which refers to the framework with which these two datasets were created. In this framework, dialogues are created via dialogue self-play and later augmented via crowdsourcing. We trained our models on different datasets in order to make sure the results are not corpus-biased. Table TABREF12 shows the statistics of these three datasets, which we will use to train and evaluate the models. The M2M dataset has more diversity in both language and dialogue flow compared to the commonly used DSTC2 dataset, which makes it appealing for the task of creating task-oriented chatbots. This is also the reason we decided to use the M2M datasets in our experiments, to see how well the models can handle a more diverse dataset. <<<Dataset Preparation>>> We followed the data preparation process used for feeding the conversation history into the encoder-decoder as in BIBREF5. Consider a sample dialogue $D$ in the corpus which consists of a number of turns exchanged between the user and the system. $D$ can be represented as ${(u_1, s_1),(u_2, s_2), ...,(u_k, s_k)}$ where $k$ is the number of turns in this dialogue. At each time step in the conversation, we encode the conversation turns up to that time step, which is the context of the dialogue so far, and the system response after that time step is used as the target. For example, given that we are processing the conversation at time step $i$, the context of the conversation so far would be ${(u_1, s_1, u_2, s_2, ..., u_i)}$ and the model has to learn to output ${(s_i)}$ as the target. <<</Dataset Preparation>>> <<</Datasets>>> <<<Training>>> We used the tensor2tensor library BIBREF29 in our experiments for training and evaluation of the sequence modeling methods. We use the Adam optimizer BIBREF30 for training the models. We set $\beta _1=0.9$, $\beta _2=0.997$, and $\epsilon =1e-9$ for the Adam optimizer and started with a learning rate of 0.2 with the noam learning rate decay schema BIBREF6. In order to avoid overfitting, we use dropout BIBREF31 with the dropout rate chosen from the [0.7-0.9] range. We also used early stopping BIBREF14 as a regularization method to avoid overfitting in our experiments. We set the batch size to 4096, the hidden size to 128, and the embedding size to 128 for all the models. We also used grid search for hyperparameter tuning for all of the trained models. Details of our training and hyperparameter tuning, and the code for reproducing the results, can be found in the chatbot-exp github repository. <<</Training>>> <<<Inference>>> At inference time, there are mainly two methods for decoding: greedy search and beam search BIBREF32. Beam search has proved to be an essential part of generative NLP tasks such as neural machine translation BIBREF33. In the case of dialogue generation systems, beam search can help alleviate the problem of having many possible outputs which do not match the target but are nevertheless valid and sensible.
Consider the case in which a task-oriented chatbot, trained for a restaurant reservation task, in response to the user utterance “Persian food”, generates the response “what time and day would you like the reservation for?” but the target defined for the system is “would you like a fancy restaurant?”. The response generated by the chatbot is a valid response which asks the user about other possible entities but does not match with the defined target. We try to alleviate this problem in inference time by applying the beam search technique with a different beam size $\alpha \in \lbrace 1, 2, 4\rbrace $ and pick the best result based on the BLEU score. Note that when $\alpha = 1$, we are using the original greedy search method for the generation task. <<</Inference>>> <<<Evaluation Measures>>> BLEU: We use the Bilingual Evaluation Understudy (BLEU) BIBREF34 metric which is commonly used in machine translation tasks. The BLEU metric can be used to evaluate dialogue generation models as in BIBREF5, BIBREF35. The BLEU metric is a word-overlap metric which computes the co-occurrence of N-grams in the reference and the generated response and also applies the brevity penalty which tries to penalize far too short responses which are usually not desired in task-oriented chatbots. We compute the BLEU score using all generated responses of our systems. Per-turn Accuracy: Per-turn accuracy measures the similarity of the system generated response versus the target response. Eric and Manning eric2017copy used this metric to evaluate their systems in which they considered their response to be correct if all tokens in the system generated response matched the corresponding token in the target response. This metric is a little bit harsh, and the results may be low since all the tokens in the generated response have to be exactly in the same position as in the target response. Per-Dialogue Accuracy: We calculate per-dialogue accuracy as used in BIBREF8, BIBREF5. For this metric, we consider all the system generated responses and compare them to the target responses. A dialogue is considered to be true if all the turns in the system generated responses match the corresponding turns in the target responses. Note that this is a very strict metric in which all the utterances in the dialogue should be the same as the target and in the right order. F1-Entity Score: Datasets used in task-oriented chores have a set of entities which represent user preferences. For example, in the restaurant domain chatbots common entities are meal, restaurant name, date, time and the number of people (these are usually the required entities which are crucial for making reservations, but there could be optional entities such as location or rating). Each target response has a set of entities which the system asks or informs the user about. Our models have to be able to discern these specific entities and inject them into the generated response. To evaluate our models we could use named-entity recognition evaluation metrics BIBREF36. The F1 score is the most commonly used metric used for the evaluation of named-entity recognition models which is the harmonic average of precision and recall of the model. We calculate this metric by micro-averaging over all the system generated responses. 
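To make the per-turn and per-dialogue accuracy definitions above concrete, the following is a minimal sketch; the whitespace tokenization and the list-of-(generated, target)-pairs data layout are assumptions made for this illustration and not the evaluation code used in the experiments.

```python
# Minimal sketch of the per-turn and per-dialogue accuracy described above.
# `dialogues` is assumed to be a list of dialogues, each a list of
# (generated_response, target_response) string pairs; this layout is
# illustrative only.

def per_turn_accuracy(dialogues):
    turns = [pair for dialogue in dialogues for pair in dialogue]
    correct = sum(1 for gen, tgt in turns if gen.split() == tgt.split())
    return correct / len(turns)

def per_dialogue_accuracy(dialogues):
    # A dialogue counts as correct only if every turn matches its target.
    correct = sum(
        1 for dialogue in dialogues
        if all(gen.split() == tgt.split() for gen, tgt in dialogue)
    )
    return correct / len(dialogues)

dialogues = [
    [("what time is the reservation for", "what time is the reservation for")],
    [("would you like a fancy restaurant",
      "what time and day would you like the reservation for")],
]
print(per_turn_accuracy(dialogues), per_dialogue_accuracy(dialogues))
```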
<<</Evaluation Measures>>> <<</Experiments>>> <<<Results and Discussion>>> <<<Comparison of Models>>> The results of running the experiments for the aforementioned models are shown in Table TABREF14 for the DSTC2 dataset and in Table TABREF18 for the M2M datasets. The bold numbers show the best performing model for each of the evaluation metrics. As discussed before, for each model we use different beam sizes (bs) at inference time and report the best one. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in the BLEU, per-turn accuracy, and entity F1 scores. The evaluation numbers are lower for the M2M datasets; in our investigation of the trained models we found that this considerable reduction is due to the fact that the M2M dataset is considerably more diverse than the DSTC2 dataset while its training corpus is smaller. <<</Comparison of Models>>> <<<Time Performance Comparison>>> Table TABREF22 shows the time performance of the models trained on the DSTC2 dataset. Note that in order to get a fair time performance comparison, we trained the models with the same batch size (4096) and on the same GPU. These numbers are for the best performing model (in terms of evaluation loss and selected using the early stopping method) for each of the sequence modelling methods. Time to Convergence (T2C) shows the approximate time that the model took to converge. We also show the loss on the development set for that specific checkpoint. <<</Time Performance Comparison>>> <<<Effect of (Self-)Attention Mechanism>>> As discussed before in Section SECREF8, self-attentional models rely on the self-attention mechanism for sequence modelling. Recurrence-based models such as LSTM and Bi-LSTM can also be augmented with an attention mechanism in order to increase their performance, as is evident in Table TABREF14, which shows the increase in the performance of both LSTM and Bi-LSTM when augmented with an attention mechanism. This leads to the question of whether we can increase the performance of recurrence-based models by adding multiple attention heads, similar to the multi-head self-attention mechanism used in self-attentional models, and outperform the self-attentional models. To investigate this question, we ran a number of experiments in which we added multiple attention heads on top of the Bi-LSTM model and also tried different numbers of self-attention heads in the self-attentional models in order to compare their performance for this specific task. Table TABREF25 shows the results of these experiments. Note that the models in Table TABREF25 are the best models that we found in our experiments on the DSTC2 dataset and we only changed one parameter for each of them, i.e. the number of attention heads in the recurrence-based models and the number of self-attention heads in the self-attentional models, keeping all other parameters unchanged. We also report the results of the models with a beam size of 2 at inference time. We increased the number of attention heads in the Bi-LSTM model up to 64 heads to see how its performance changes. Note that increasing the number of attention heads makes training intractably time consuming, while the model size increases significantly, as shown in Table TABREF24. Furthermore, by observing the results of the Bi-LSTM+Att model in Table TABREF25 (on both the test and development sets) we can see that the Bi-LSTM performance decreases, and thus there is no need to increase the number of attention heads further.
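For reference, the following is a minimal PyTorch sketch of the kind of Bi-LSTM encoder augmented with multi-head attention that is compared against self-attentional models in this section. It is an illustration only: the experiments in this paper were run with the tensor2tensor library, and the vocabulary size, hidden size, and number of heads below are placeholder choices.

```python
# Minimal PyTorch sketch of a Bi-LSTM encoder with multi-head attention on
# top, in the spirit of the Bi-LSTM+Att models compared in this section.
# Sizes are placeholders; this is not the tensor2tensor implementation used
# for the reported experiments.
import torch
import torch.nn as nn

class BiLSTMWithAttention(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, hidden=64, num_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM: forward and backward states are concatenated.
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        # Multi-head attention over the Bi-LSTM outputs (2 * hidden wide).
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads, batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)       # (batch, seq, embed_dim)
        h, _ = self.lstm(x)             # (batch, seq, 2 * hidden)
        out, _ = self.attn(h, h, h)     # attention with query = key = value = h
        return out

encoder = BiLSTMWithAttention()
print(encoder(torch.randint(0, 1000, (2, 7))).shape)  # torch.Size([2, 7, 128])
```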
Our findings in Table TABREF25 show that the self-attention mechanism can outperform recurrence-based models even if the recurrence-based models have multiple attention heads. The Bi-LSTM model with 64 attention heads cannot beat the best Transformer model with NH=4, and its results are in fact very close to those of the Transformer model with NH=1. This observation clearly depicts the power of self-attentional models and demonstrates that the self-attention mechanism, used as the backbone for learning, outperforms recurrence-based models even when they are augmented with multiple attention heads. <<</Effect of (Self-)Attention Mechanism>>> <<</Results and Discussion>>> <<<Conclusion and Future Work>>> We have determined that Transformers and Universal Transformers are indeed effective at generating appropriate responses in task-oriented chatbot systems. In fact, their performance is even better than that of the typically used deep learning architectures. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in the BLEU, per-turn accuracy, and entity F1 scores. The Transformer model beats all other models on all of the evaluation metrics. Also, comparing the results of LSTM with and without the attention mechanism, as well as Bi-LSTM with and without it, shows that adding the attention mechanism increases the performance of these models. Comparing the results of the self-attentional models shows that the Transformer model outperforms the other self-attentional models, while the Universal Transformer model gives reasonably good results. In future work, it would be interesting to compare the performance of self-attentional models (specifically the winning Transformer model) against other end-to-end architectures such as Memory Augmented Networks. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nTask-Oriented Chatbots Architectures\nSequence Modelling Methods\nModels\nLSTM and Bi-Directional LSTM\nTransformer\nUniversal Transformer\nExperiments\nDatasets\nDataset Preparation\nTraining\nInference\nEvaluation Measures\nResults and Discussion\nComparison of Models\nTime Performance Comparison\nEffect of (Self-)Attention Mechanism\nConclusion and Future Work" ], "type": "outline" }
1908.06083
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack <<<Abstract>>> The detection of offensive language in the context of a dialogue has become an increasingly important application of natural language processing. The detection of trolls in public forums (Galan-Garcia et al., 2016), and the deployment of chatbots in the public domain (Wolf et al., 2017) are two examples that show the necessity of guarding against adversarially offensive behavior on the part of humans. In this work, we develop a training scheme for a model to become robust to such human attacks by an iterative build it, break it, fix it strategy with humans and models in the loop. In detailed experiments we show this approach is considerably more robust than previous systems. Further, we show that offensive language used within a conversation critically depends on the dialogue context, and cannot be viewed as a single sentence offensive detection task as in most previous work. Our newly collected tasks and methods will be made open source and publicly available. <<</Abstract>>> <<<Introduction>>> The detection of offensive language has become an important topic as the online community has grown, as so too have the number of bad actors BIBREF2. Such behavior includes, but is not limited to, trolling in public discussion forums BIBREF3 and via social media BIBREF4, BIBREF5, employing hate speech that expresses prejudice against a particular group, or offensive language specifically targeting an individual. Such actions can be motivated to cause harm from which the bad actor derives enjoyment, despite negative consequences to others BIBREF6. As such, some bad actors go to great lengths to both avoid detection and to achieve their goals BIBREF7. In that context, any attempt to automatically detect this behavior can be expected to be adversarially attacked by looking for weaknesses in the detection system, which currently can easily be exploited as shown in BIBREF8, BIBREF9. A further example, relevant to the natural langauge processing community, is the exploitation of weaknesses in machine learning models that generate text, to force them to emit offensive language. Adversarial attacks on the Tay chatbot led to the developers shutting down the system BIBREF1. In this work, we study the detection of offensive language in dialogue with models that are robust to adversarial attack. We develop an automatic approach to the “Build it Break it Fix it” strategy originally adopted for writing secure programs BIBREF10, and the “Build it Break it” approach consequently adapting it for NLP BIBREF11. In the latter work, two teams of researchers, “builders” and “breakers” were used to first create sentiment and semantic role-labeling systems and then construct examples that find their faults. In this work we instead fully automate such an approach using crowdworkers as the humans-in-the-loop, and also apply a fixing stage where models are retrained to improve them. Finally, we repeat the whole build, break, and fix sequence over a number of iterations. We show that such an approach provides more and more robust systems over the fixing iterations. 
Analysis of the type of data collected in the iterations of the break it phase shows clear distribution changes, moving away from simple use of profanity and other obvious offensive words to utterances that require understanding of world knowledge, figurative language, and use of negation to detect if they are offensive or not. Further, data collected in the context of a dialogue rather than a sentence without context provides more sophisticated attacks. We show that model architectures that use the dialogue context efficiently perform much better than systems that do not, where the latter has been the main focus of existing research BIBREF12, BIBREF5, BIBREF13. Code for our entire build it, break it, fix it algorithm will be made open source, complete with model training code and crowdsourcing interface for humans. Our data and trained models will also be made available for the community. <<</Introduction>>> <<<Related Work>>> The task of detecting offensive language has been studied across a variety of content classes. Perhaps the most commonly studied class is hate speech, but work has also covered bullying, aggression, and toxic comments BIBREF13. To this end, various datasets have been created to benchmark progress in the field. In hate speech detection, recently BIBREF5 compiled and released a dataset of over 24,000 tweets labeled as containing hate speech, offensive language, or neither. The TRAC shared task on Aggression Identification, a dataset of over 15,000 Facebook comments labeled with varying levels of aggression, was released as part of a competition BIBREF14. In order to benchmark toxic comment detection, The Wikipedia Toxic Comments dataset (which we study in this work) was collected and extracted from Wikipedia Talk pages and featured in a Kaggle competition BIBREF12, BIBREF15. Each of these benchmarks examine only single-turn utterances, outside of the context in which the language appeared. In this work we recommend that future systems should move beyond classification of singular utterances and use contextual information to help identify offensive language. Many approaches have been taken to solve these tasks – from linear regression and SVMs to deep learning BIBREF16. The best performing systems in each of the competitions mentioned above (for aggression and toxic comment classification) used deep learning approaches such as LSTMs and CNNs BIBREF14, BIBREF15. In this work we consider a large-pretrained transformer model which has been shown to perform well on many downstream NLP tasks BIBREF17. The broad class of adversarial training is currently a hot topic in machine learning BIBREF18. Use cases include training image generators BIBREF19 as well as image classifiers to be robust to adversarial examples BIBREF20. These methods find the breaking examples algorithmically, rather than by using humans breakers as we do. Applying the same approaches to NLP tends to be more challenging because, unlike for images, even small changes to a sentence can cause a large change in the meaning of that sentence, which a human can detect but a lower quality model cannot. Nevertheless algorithmic approaches have been attempted, for example in text classification BIBREF21, machine translation BIBREF22, dialogue generation tasks BIBREF23 and reading comprehension BIBREF24. The latter was particularly effective at proposing a more difficult version of the popular SQuAD dataset. 
As mentioned in the introduction, our approach takes inspiration from “Build it Break it” approaches which have been successfully tried in other domains BIBREF10, BIBREF11. Those approaches advocate finding faults in systems by having humans look for insecurities (in software) or prediction failures (in models), but do not advocate an automated approach as we do here. Our work is also closely connected to the “Mechanical Turker Descent” algorithm detailed in BIBREF25 where language to action pairs were collected from crowdworkers by incentivizing them with a game-with-a-purpose technique: a crowdworker receives a bonus if their contribution results in better models than another crowdworker. We did not gamify our approach in this way, but still our approach has commonalities in the round-based improvement of models through crowdworker interaction. <<</Related Work>>> <<<Baselines: Wikipedia Toxic Comments>>> In this section we describe the publicly available data that we have used to bootstrap our build it break it fix it approach. We also compare our model choices with existing work and clarify the metrics chosen to report our results. <<<Wikipedia Toxic Comments>>> The Wikipedia Toxic Comments dataset (WTC) has been collected in a common effort from the Wikimedia Foundation and Jigsaw BIBREF12 to identify personal attacks online. The data has been extracted from the Wikipedia Talk pages, discussion pages where editors can discuss improvements to articles or other Wikipedia pages. We considered the version of the dataset that corresponds to the Kaggle competition: “Toxic Comment Classification Challenge" BIBREF15 which features 7 classes of toxicity: toxic, severe toxic, obscene, threat, insult, identity hate and non-toxic. In the same way as in BIBREF26, every label except non-toxic is grouped into a class offensive while the non-toxic class is kept as the safe class. In order to compare our results to BIBREF26, we similarly split this dataset to dedicate 10% as a test set. 80% are dedicated to train set while the remaining 10% is used for validation. Statistics on the dataset are shown in Table TABREF4. <<</Wikipedia Toxic Comments>>> <<<Models>>> We establish baselines using two models. The first one is a binary classifier built on top of a large pre-trained transformer model. We use the same architecture as in BERT BIBREF17. We add a linear layer to the output of the first token ([CLS]) to produce a final binary classification. We initialize the model using the weights provided by BIBREF17 corresponding to “BERT-base". The transformer is composed of 12 layers with hidden size of 768 and 12 attention heads. We fine-tune the whole network on the classification task. We also compare it the fastText classifier BIBREF27 for which a given sentence is encoded as the average of individual word vectors that are pre-trained on a large corpus issued from Wikipedia. A linear layer is then applied on top to yield a binary classification. <<</Models>>> <<<Experiments>>> We compare the two aforementioned models with BIBREF26 who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors BIBREF28. Results are listed in Table TABREF5 and we compare them using the weighted-F1, i.e. the sum of F1 score of each class weighted by their frequency in the dataset. We also report the F1 of the offensive-class which is the metric we favor within this work, although we report both. (Note that throughout the paper, the notation F1 is always referring to offensive-class F1.) 
Indeed, in the case of an imbalanced dataset such as Wikipedia Toxic Comments where most samples are safe, the weighted-F1 is closer to the F1 score of the safe class while we focus on detecting offensive content. Our BERT-based model outperforms the method from BIBREF26; throughout the rest of the paper, we use the BERT-based architecture in our experiments. In particular, we used this baseline trained on WTC to bootstrap our approach, to be described subsequently. <<</Experiments>>> <<</Baselines: Wikipedia Toxic Comments>>> <<<Build it Break it Fix it Method>>> In order to train models that are robust to adversarial behavior, we posit that it is crucial collect and train on data that was collected in an adversarial manner. We propose the following automated build it, break it, fix it algorithm: Build it: Build a model capable of detecting offensive messages. This is our best-performing BERT-based model trained on the Wikipedia Toxic Comments dataset described in the previous section. We refer to this model throughout as $A_0$. Break it: Ask crowdworkers to try to “beat the system" by submitting messages that our system ($A_0$) marks as safe but that the worker considers to be offensive. Fix it: Train a new model on these collected examples in order to be more robust to these adversarial attacks. Repeat: Repeat, deploying the newly trained model in the break it phase, then fix it again. See Figure FIGREF6 for a visualization of this process. <<<Break it Details>>> <<<Definition of offensive>>> Throughout data collection, we characterize offensive messages for users as messages that would not be “ok to send in a friendly conversation with someone you just met online." We use this specific language in an attempt to capture various classes of content that would be considered unacceptable in a friendly conversation, without imposing our own definitions of what that means. The phrase “with someone you just met online" was meant to mimic the setting of a public forum. <<</Definition of offensive>>> <<<Crowderworker Task>>> We ask crowdworkers to try to “beat the system" by submitting messages that our system marks as safe but that the worker considers to be offensive. For a given round, workers earn a “game” point each time they are able to “beat the system," or in other words, trick the model by submitting offensive messages that the model marks as safe. Workers earn up to 5 points each round, and have two tries for each point: we allow multiple attempts per point so that workers can get feedback from the models and better understand their weaknesses. The points serve to indicate success to the crowdworker and motivate to achieve high scores, but have no other meaning (e.g. no monetary value as in BIBREF25). More details regarding the user interface and instructions can be found in Appendix SECREF9. <<</Crowderworker Task>>> <<<Models to Break>>> During round 1, workers try to break the baseline model $A_0$, trained on Wikipedia Toxic Comments. For rounds $i$, $i > 1$, workers must break both the baseline model and the model from the previous “fix it" round, which we refer to as $A_{i-1}$. In that case, the worker must submit messages that both $A_0$ and $A_{i-1}$ mark as safe but which the worker considers to be offensive. <<</Models to Break>>> <<</Break it Details>>> <<<Fix it Details>>> During the “fix it" round, we update the models with the newly collected adversarial data from the “break it" round. 
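The loop can be summarized with the following schematic sketch; the helper functions are trivial placeholders standing in for the BERT-based training and the crowdworker collection described above, not an actual API from this work.

```python
# Schematic, runnable-as-a-stub sketch of the build it / break it / fix it
# loop described above. The helpers are placeholders for real model training
# and crowdworker data collection.

def train_model(data):
    # Placeholder: in the paper this is a BERT-based classifier fine-tuned on `data`.
    return {"trained_on": len(data)}

def collect_adversarial_examples(must_fool, n):
    # Placeholder: in the paper, crowdworkers submit offensive messages that
    # every model in `must_fool` labels as safe.
    return [f"adversarial example {i}" for i in range(n)]

def build_break_fix(wiki_toxic_data, num_rounds=3, examples_per_round=1000):
    models = [train_model(wiki_toxic_data)]        # Build it: baseline A0
    collected = []
    for _ in range(num_rounds):
        # Break it: fool both the baseline A0 and the latest model A_{i-1}.
        collected += collect_adversarial_examples(
            must_fool=[models[0], models[-1]], n=examples_per_round)
        # Fix it: retrain on WTC plus all adversarial rounds collected so far.
        models.append(train_model(list(wiki_toxic_data) + collected))
    return models                                   # [A0, A1, ..., A_num_rounds]

print(len(build_break_fix(["wtc example"] * 10)))   # 4 models: A0..A3
```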
The training data consists of all previous rounds of data, so that model $A_i$ is trained on all rounds $n$ for $n \le i$, as well as the Wikipedia Toxic Comments data. We split each round of data into train, validation, and test partitions. The validation set is used for hyperparameter selection. The test sets are used to measure how robust we are to new adversarial attacks. With increasing round $i$, $A_i$ should become more robust to increasingly complex human adversarial attacks. <<</Fix it Details>>> <<</Build it Break it Fix it Method>>> <<<Single-Turn Task>>> We first consider a single-turn set-up, i.e. detection of offensive language in one utterance, with no dialogue context or conversational history. <<<Data Collection>>> <<<Adversarial Collection>>> We collected three rounds of data with the build it, break it, fix it algorithm described in the previous section. Each round of data consisted of 1000 examples, leading to 3000 single-turn adversarial examples in total. For the remainder of the paper, we refer to this method of data collection as the adversarial method. <<</Adversarial Collection>>> <<<Standard Collection>>> In addition to the adversarial method, we also collected data in a non-adversarial manner in order to directly compare the two set-ups. In this method – which we refer to as the standard method, we simply ask crowdworkers to submit messages that they consider to be offensive. There is no model to break. Instructions are otherwise the same. In this set-up, there is no real notion of “rounds", but for the sake of comparison we refer to each subsequent 1000 examples collected in this manner as a “round". We collect 3000 examples – or three rounds of data. We refer to a model trained on rounds $n \le i$ of the standard data as $S_i$. <<</Standard Collection>>> <<<Task Formulation Details>>> Since all of the collected examples are labeled as offensive, to make this task a binary classification problem, we will also add safe examples to it. The “safe data" is comprised of utterances from the ConvAI2 chit-chat task BIBREF29, BIBREF30 which consists of pairs of humans getting to know each other by discussing their interests. Each utterance we used was reviewed by two independent crowdworkers and labeled as safe, with the same characterization of safe as described before. For each partition (train, validation, test), the final task has a ratio of 9:1 safe to offensive examples, mimicking the division of the Wikipedia Toxic Comments dataset used for training our baseline models. Dataset statistics for the final task can be found in Table TABREF21. We refer to these tasks – with both safe and offensive examples – as the adversarial and standard tasks. <<</Task Formulation Details>>> <<<Model Training Details>>> Using the BERT-based model architecture described in Section SECREF3, we trained models on each round of the standard and adversarial tasks, multi-tasking with the Wikipedia Toxic Comments task. We weight the multi-tasking with a mixing parameter which is also tuned on the validation set. Finally, after training weights with the cross entropy loss, we adjust the final bias also using the validation set. We optimize for the sensitive class (i.e. offensive-class) F1 metric on the standard and adversarial validation sets respectively. For each task (standard and adversarial), on round $i$, we train on data from all rounds $n$ for $n \le i$ and optimize for performance on the validation sets $n \le i$. 
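For concreteness, below is a minimal sketch of the BERT-based classifier described in Section SECREF3 (a linear layer over the [CLS] token representation), written with the Hugging Face transformers library. It is a stand-in for the actual implementation; the checkpoint name, the toy input, and the plain cross-entropy step are assumptions for illustration, and the multi-task mixing with Wikipedia Toxic Comments is omitted.

```python
# Minimal sketch of a BERT-based binary classifier: a linear layer on the
# representation of the first ([CLS]) token, as described in the Models
# section. Written with the Hugging Face `transformers` library as a
# stand-in; "bert-base-uncased" is an assumed checkpoint name.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class SafetyClassifier(nn.Module):
    def __init__(self, pretrained="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained)
        self.classifier = nn.Linear(self.bert.config.hidden_size, 2)  # safe / offensive

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        cls = hidden[:, 0]              # representation of the [CLS] token
        return self.classifier(cls)     # logits over {safe, offensive}

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = SafetyClassifier()
batch = tokenizer(["nice to meet you online"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.functional.cross_entropy(logits, torch.tensor([0]))  # 0 = safe (toy label)
print(logits.shape, loss.item())
```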
<<</Model Training Details>>> <<</Data Collection>>> <<<Experimental Results>>> We conduct experiments comparing the adversarial and standard methods. We break down the results into “break it" results comparing the data collected and “fix it" results comparing the models obtained. <<<Break it Phase>>> Examples obtained from both the adversarial and standard collection methods were found to be clearly offensive, but we note several differences in the distribution of examples from each task, shown in Table TABREF21. First, examples from the standard task tend to contain more profanity. Using a list of common English obscenities and otherwise bad words, in Table TABREF21 we calculate the percentage of examples in each task containing such obscenities, and see that the standard examples contain at least seven times as many as each round of the adversarial task. Additionally, in previous works, authors have observed that classifiers struggle with negations BIBREF8. This is borne out by our data: examples from the single-turn adversarial task more often contain the token “not" than examples from the standard task, indicating that users are easily able to fool the classifier with negations. We also anecdotally see figurative language such as “snakes hiding in the grass” in the adversarial data, which contain no individually offensive words, the offensive nature is captured by reading the entire sentence. Other examples require sophisticated world knowledge such as that many cultures consider eating cats to be offensive. To quantify these differences, we performed a blind human annotation of a sample of the data, 100 examples of standard and 100 examples of adversarial round 1. Results are shown in Table TABREF16. Adversarial data was indeed found to contain less profanity, fewer non-profane but offending words (such as “idiot”), more figurative language, and to require more world knowledge. We note that, as anticipated, the task becomes more challenging for the crowdworkers with each round, indicated by the decreasing average scores in Table TABREF27. In round 1, workers are able to get past $A_0$ most of the time – earning an average score of $4.56$ out of 5 points per round – showcasing how susceptible this baseline is to adversarial attack despite its relatively strong performance on the Wikipedia Toxic Comments task. By round 3, however, workers struggle to trick the system, earning an average score of only $1.6$ out of 5. A finer-grained assessment of the worker scores can be found in Table TABREF38 in the appendix. <<</Break it Phase>>> <<<Fix it Phase>>> Results comparing the performance of models trained on the adversarial ($A_i$) and standard ($S_i$) tasks are summarized in Table TABREF22, with further results in Table TABREF41 in Appendix SECREF40. The adversarially trained models $A_i$ prove to be more robust to adversarial attack: on each round of adversarial testing they outperform standard models $S_i$. Further, note that the adversarial task becomes harder with each subsequent round. In particular, the performance of the standard models $S_i$ rapidly deteriorates between round 1 and round 2 of the adversarial task. This is a clear indication that models need to train on adversarially-collected data to be robust to adversarial behavior. 
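As a small illustration of the surface statistics reported in the break it analysis above (Table TABREF21), the sketch below computes the percentage of examples containing a word from a given list; the word list and toy examples are placeholders, not the actual obscenity list or collected data.

```python
# Minimal sketch of the surface-level statistics discussed above: the
# percentage of examples containing a word from a given list, and the rate
# of the token "not". The `bad_words` set and toy examples are placeholders.
def percent_containing(examples, vocab):
    hits = sum(1 for ex in examples if any(w in ex.lower().split() for w in vocab))
    return 100.0 * hits / len(examples)

bad_words = {"idiot"}                    # placeholder for an obscenity list
standard = ["you are an idiot", "shut up"]
adversarial = ["you are not the sharpest tool in the shed"]

print(percent_containing(standard, bad_words))      # 50.0
print(percent_containing(adversarial, bad_words))   # 0.0
print(percent_containing(adversarial, {"not"}))     # 100.0
```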
Standard models ($S_i$), trained on the standard data, tend to perform similarly to the adversarial models ($A_i$) as measured on the standard test sets, with the exception of training round 3, in which $A_3$ fails to improve on this task, likely due to being too optimized for adversarial tasks. The standard models $S_i$, on the other hand, are improving with subsequent rounds as they have more training data of the same distribution as the evaluation set. Similarly, our baseline model performs best on its own test set, but other models are not far behind. Finally, we remark that all scores of 0 in Table TABREF22 are by design, as for round $i$ of the adversarial task, both $A_0$ and $A_{i-1}$ classified each example as safe during the `break it' data collection phase. <<</Fix it Phase>>> <<</Experimental Results>>> <<</Single-Turn Task>>> <<<Multi-Turn Task>>> In most real-world applications, we find that adversarial behavior occurs in context – whether it is in the context of a one-on-one conversation, a comment thread, or even an image. In this work we focus on offensive utterances within the context of two-person dialogues. For dialogue safety we posit it is important to move beyond classifying single utterances, as it may be the case that an utterance is entirely innocuous on its own but extremely offensive in the context of the previous dialogue history. For instance, “Yes, you should definitely do it!" is a rather inoffensive message by itself, but most would agree that it is a hurtful response to the question “Should I hurt myself?" <<<Task Implementation>>> To this end, we collect data by asking crowdworkers to try to “beat" our best single-turn classifier (using the model that performed best on rounds 1-3 of the adversarial task, i.e., $A_3$), in addition to our baseline classifier $A_0$. The workers are shown truncated pieces of a conversation from the ConvAI2 chit-chat task, and asked to continue the conversation with offensive responses that our classifier marks as safe. As before, workers have two attempts per conversation to try to get past the classifier and are shown five conversations per round. They are given a score (out of five) at the end of each round indicating the number of times they successfully fooled the classifier. We collected 3000 offensive examples in this manner. As in the single-turn set up, we combine this data with safe examples with a ratio of 9:1 safe to offensive for classifier training. The safe examples are dialogue examples from ConvAI2 for which the responses were reviewed by two independent crowdworkers and labeled as safe, as in the s single-turn task set-up. We refer to this overall task as the multi-turn adversarial task. Dataset statistics are given in Table TABREF30. <<</Task Implementation>>> <<</Multi-Turn Task>>> <<<Conclusion>>> We have presented an approach to build more robust offensive language detection systems in the context of a dialogue. We proposed a build it, break it, fix it, and then repeat strategy, whereby humans attempt to break the models we built, and we use the broken examples to fix the models. We show this results in far more nuanced language than in existing datasets. The adversarial data includes less profanity, which existing classifiers can pick up on, and is instead offensive due to figurative language, negation, and by requiring more world knowledge, which all make current classifiers fail. Similarly, offensive language in the context of a dialogue is also more nuanced than stand-alone offensive utterances. 
We show that classifiers that learn from these more complex examples are indeed more robust to attack, and that using the dialogue context gives improved performance if the model architecture takes it into account. In this work we considered a binary problem (offensive or safe). Future work could consider classes of offensive language separately BIBREF13, or explore other dialogue tasks, e.g. from social media or forums. Another interesting direction is to explore how our build it, break it, fix it strategy would similarly apply to make neural generative models safe BIBREF31. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nBaselines: Wikipedia Toxic Comments\nWikipedia Toxic Comments\nModels\nExperiments\nBuild it Break it Fix it Method\nBreak it Details\nDefinition of offensive\nCrowderworker Task\nModels to Break\nFix it Details\nSingle-Turn Task\nData Collection\nAdversarial Collection\nStandard Collection\nTask Formulation Details\nModel Training Details\nExperimental Results\nBreak it Phase\nFix it Phase\nMulti-Turn Task\nTask Implementation\nConclusion" ], "type": "outline" }
1911.05153
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Improving Robustness of Task Oriented Dialog Systems <<<Abstract>>> Task oriented language understanding in dialog systems is often modeled using intents (task of a query) and slots (parameters for that task). Intent detection and slot tagging are, in turn, modeled using sentence classification and word tagging techniques respectively. Similar to adversarial attack problems with computer vision models discussed in existing literature, these intent-slot tagging models are often over-sensitive to small variations in input -- predicting different and often incorrect labels when small changes are made to a query, thus reducing their accuracy and reliability. However, evaluating a model's robustness to these changes is harder for language since words are discrete and an automated change (e.g. adding `noise') to a query sometimes changes the meaning and thus labels of a query. In this paper, we first describe how to create an adversarial test set to measure the robustness of these models. Furthermore, we introduce and adapt adversarial training methods as well as data augmentation using back-translation to mitigate these issues. Our experiments show that both techniques improve the robustness of the system substantially and can be combined to yield the best results. <<</Abstract>>> <<<Introduction>>> In computer vision, it is well known that otherwise competitive models can be "fooled" by adding intentional noise to the input images BIBREF0, BIBREF1. Such changes, imperceptible to the human eye, can cause the model to reverse its initial correct decision on the original input. This has also been studied for Automatic Speech Recognition (ASR) by including hidden commands BIBREF2 in the voice input. Devising such adversarial examples for machine learning algorithms, in particular for neural networks, along with defense mechanisms against them, has been of recent interest BIBREF3. The lack of smoothness of the decision boundaries BIBREF4 and reliance on weakly correlated features that do not generalize BIBREF5 seem to be the main reasons for confident but incorrect predictions for instances that are far from the training data manifold. Among the most successful techniques to increase resistance to such attacks is perturbing the training data and enforcing the output to remain the same BIBREF4, BIBREF6. This is expected to improve the smoothing of the decision boundaries close to the training data but may not help with points that are far from them. There has been recent interest in studying this adversarial attack phenomenon for natural language processing tasks, but that is harder than vision problems for at least two reasons: 1) textual input is discrete, and 2) adding noise may completely change a sentence's meaning or even make it meaningless. Although there are various works that devise adversarial examples in the NLP domain, defense mechanisms have been rare. BIBREF7 applied perturbation to the continuous word embeddings instead of the discrete tokens. 
This has been shown BIBREF8 to act as a regularizer that increases the model performance on the clean dataset, but the perturbed inputs are not true adversarial examples, as they do not correspond to any input text and it cannot be tested whether they are perceptible to humans or not. Unrestricted adversarial examples BIBREF9 lift the constraint on the size of the added perturbation and as such can be harder to defend against. Recently, Generative Adversarial Networks (GANs) alongside an auxiliary classifier have been proposed to generate adversarial examples for each label class. In the context of natural languages, the use of seq2seq models BIBREF10 seems to be a natural way of perturbing an input example BIBREF11. Such perturbations, which practically paraphrase the original sentence, lie somewhere between the two methods described above. On one hand, the decoder is not constrained to be in a norm ball around the input and, on the other hand, the output is strongly conditioned on the input and hence not unrestricted. Current NLP work on input perturbations and defense against them has mainly focused on sentence classification. In this paper, we examine a harder task: joint intent detection (sentence classification) and slot tagging (sequence word tagging) for task oriented dialog, which has been of recent interest BIBREF12 due to the ubiquity of commercial conversational AI systems. In the task and data described in Section SECREF2, we observe that exchanging a word with its synonym, as well as changing the structural order of a query, can flip the model prediction. Table TABREF1 shows a few such sentence pairs for which the model prediction is different. Motivated by this, in this paper, we focus on analyzing the model robustness against two types of untargeted (that is, we do not target a particular perturbed label) perturbations: paraphrasing and random noise. In order to evaluate the defense mechanisms, we discuss how one can create an adversarial test set focusing on these two types of perturbations in the setting of joint sentence classification and sequence word tagging. Our contributions are:
1. Analyzing the robustness of the joint task of sentence classification and sequence word tagging through generating diverse untargeted adversarial examples using back-translation and a noisy autoencoder, and
2. Two techniques to improve upon a model's robustness: data augmentation using back-translation, and an adversarial logit pairing loss.
Data augmentation using back-translation was earlier proposed as a defense mechanism for a sentence classification task BIBREF11; we extend it to sequence word tagging. We investigate using different types of machine translation systems, as well as different auxiliary languages, for both test set generation and data augmentation. Logit pairing was proposed for improving robustness in the image classification setting with norm ball attacks BIBREF6; we extend it to the NLP context. We show that combining the two techniques gives the best results. <<</Introduction>>> <<<Task and Data>>> In conversational AI, the language understanding task typically consists of classifying the intent of a sentence and tagging the corresponding slots. For example, a query like What's the weather in Sydney today could be annotated as a weather/find intent, with Sydney and today being location and datetime slots, respectively.
This predicted intent then informs which API to call to answer the query and the predicted slots inform the arguments for the call. See Fig. FIGREF2. Slot tagging is arguably harder compared to intent classification since the spans need to align as well. We use the data provided by BIBREF13, which consists of task-oriented queries in weather and alarm domains. The data contains 25k training, 3k evaluation and 7k test queries with 11 intents and 7 slots. We conflate and use a common set of labels for the two domains. Since there is no ambiguous slot or intent in the domains, unlike BIBREF14, we do not need to train a domain classifier, neither jointly nor at the beginning of the pipeline. If a query is not supported by the system but is unambiguously part of the alarm or weather domains, it is marked as alarm/unsupported or weather/unsupported, respectively. <<</Task and Data>>> <<<Robustness Evaluation>>> To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e., perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactical changes. We use two approaches described in the literature: back-translation and noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system and hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that, though very similar to the original test set, result in wrong predictions. We will measure the model robustness against such changes. Also note that to make the test set hard, we select only the examples for which the model prediction is different for the paraphrased sentence compared to the original sentence. We, however, do not use the original annotation for the perturbed sentences – instead, we re-annotate the sentences manually. We explain the motivation and methodology for manual annotation later in this section. <<<Automatically Generating Examples>>> We describe two methods of devising untargeted (not targeted towards a particular label) paraphrase generation to find a subset that dramatically reduces the accuracy of the model mentioned in the previous section. We follow BIBREF11 and BIBREF15 to generate the potential set of sentences. <<<Back-translation>>> Back-translation is a common technique in Machine Translation (MT) to improve translation performance, especially for low-resource language pairs BIBREF16, BIBREF17, BIBREF18. In back-translation, an MT system is used to translate the original sentences to an auxiliary language and a reverse MT system translates them back into the original language. At the final decoding phase, the top k beams are the variations of the original sentence. See Fig. FIGREF5. BIBREF11 showed the effectiveness of simple back-translation in quickly generating adversarial paraphrases and breaking the correctly predicted examples. To increase diversity, we use two different MT systems and two different auxiliary languages - Czech (cs) and Spanish (es), to use with our training data in English (en). We use the Nematus BIBREF19 pre-trained cs-en model, which was also used in BIBREF11, as well as the FB internal MT system with pre-trained models for cs-en and es-en language pairs. <<</Back-translation>>> <<<Noisy Sequence Autoencoder>>> Following BIBREF15, we train a sequence autoencoder BIBREF20 using all the training data.
At test time, we add noise to the last hidden state of the encoder, which is used to decode a variation. We found that not using attention results in more diverse examples, by giving the model more freedom to stray from the original sentence. We again decode the top k beams as variations to the original sentence. We observed that the seq2seq model results in less meaningful sentences than using the MT systems, which have been trained over millions of sentences. <<</Noisy Sequence Autoencoder>>> <<</Automatically Generating Examples>>> <<<Annotation>>> For each of the above methods, we use the original test data and generate paraphrases using k=5 beams. We remove the beams that are the same as the original sentence after lower-casing. In order to make sure we have a high-quality adversarial test set, we need to manually check the model's prediction on the above automatically-generated datasets. Unlike targeted methods to procure adversarial examples, our datasets have been generated by random perturbations in the original sentences. Hence, we expect that the true adversarial examples would be quite sparse. In order to obviate the need for manual annotation of a large dataset to find these sparse examples, we sample only from the paraphrases for which the predicted intent is different from the original sentence's predicted intent. This significantly increases the chance of encountering an adversarial example. Note that the model accuracy on this test set might not be zero for two reasons: 1) the flipped intent might actually be justified and not a mistake. For example, "Cancel the alarm" and "Pause the alarm" may be considered as paraphrases, but in the dataset they correspond to alarm/cancel and alarm/pause intents, respectively, and 2) the model might have been making an error in the original prediction, which was corrected by the paraphrase. (However, upon manual observation, this rarely happens). The other reason that we need manual annotation is that such unrestricted generation may result in new variations that can be meaningless or ambiguous without any context. Note that if the meaning can be easily inferred, we do not count slight grammatical errors as meaningless. Thus, we manually double annotate the sentences with flipped intent classification, with disagreements resolved by a third annotator. As a part of this manual annotation, we also remove the meaningless and ambiguous sentences. Note that these adversarial examples are untargeted, i.e., we had no control over which new label a perturbed example would end up with. <<</Annotation>>> <<<Analysis>>> We have shown adversarial examples from different sources alongside their original sentence in Table TABREF3. We observe that some patterns, such as the addition of a definite article or a gerund, appear more often in the es test set, which perhaps stems from the properties of the Spanish language (i.e., most nouns have an article and present simple/continuous tenses are often interchangeable). On the other hand, there is more verbal diversity in the cs test set, which may be because of the linguistic distance of Czech from English compared with Spanish. Moreover, we observe many imperative-to-declarative transformations in all the back-translated examples. Finally, the seq2seq examples seem to have a higher degree of freedom, but that can tip them over into the meaningless realm more often too.
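As a rough illustration of the generation-and-filtering pipeline described in the preceding subsections (back-translate with k=5 beams, drop beams identical to the original after lower-casing, and keep only candidates whose predicted intent flips), here is a minimal Python sketch under stated assumptions: translate_beams() and predict_intent() are hypothetical stand-ins for the MT systems and the base model used in this work, not real APIs.

def back_translate(text, aux_lang, translate_beams, k=5):
    """Return up to k back-translated variations of text via an auxiliary language."""
    pivot = translate_beams(text, src="en", tgt=aux_lang, k=1)[0]
    beams = translate_beams(pivot, src=aux_lang, tgt="en", k=k)
    # Drop beams that match the original after lower-casing, as described above.
    return [b for b in beams if b.lower() != text.lower()]

def candidate_adversarial_set(test_sentences, aux_lang, translate_beams, predict_intent):
    """Keep only paraphrases whose predicted intent differs from the prediction on the
    original sentence, i.e. the candidates that are then manually re-annotated."""
    candidates = []
    for sentence in test_sentences:
        original_intent = predict_intent(sentence)
        for paraphrase in back_translate(sentence, aux_lang, translate_beams):
            if predict_intent(paraphrase) != original_intent:
                candidates.append((sentence, paraphrase))
    return candidates

The same filtering step applies unchanged to paraphrases coming from the noisy sequence autoencoder; only the generation function differs.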
<<</Analysis>>> <<</Robustness Evaluation>>> <<<Base Model>>> A commonly used architecture for the task described in Section SECREF2 is a bidirectional LSTM for the sentence representation with separate projection layers for sentence (intent) classification and sequence word (slot) tagging BIBREF21, BIBREF22, BIBREF12, BIBREF14. In order to evaluate the model in a task oriented setting, exact match accuracy (from now on, accuracy) is of paramount importance. This is defined as the percentage of the sentences for which the intent and all the slots have been correctly tagged. We use two biLSTM layers of size 200 and two feed-forward layers for the intents and the slots. We use dropout of $0.3$ and train the model for 20 epochs with learning rate of $0.01$ and weight decay of $0.001$. This model, our baseline, achieves $87.1\%$ accuracy over the test set. The performance of the base model described in the previous section is shown in the first row of Table TABREF8 for the Nematus cs-en ($\bar{cs}$), FB MT system cs-en (cs) and es-en (es), sequence autoencoder (seq2seq), and the average of the adversarial sets (avg). We also included the results for the ensemble model, which combines the decisions of five separate baseline models that differ in batch order, initialization, and dropout masking. We can see that, similar to the case in computer vision BIBREF4, the adversarial examples seem to stem from fundamental properties of the neural networks and ensembling helps only a little. <<</Base Model>>> <<<Approaches to Improve Robustness>>> In order to improve robustness of the base model against paraphrases and random noise, we propose two approaches: data augmentation and model smoothing via adversarial logit pairing. Data augmentation generates and adds training data without manual annotation. This would help the model see variations that it has not observed in the original training data. As discussed before, back-translation is one way to generate unlabeled data automatically. In this paper, we show how we can automatically generate labels for such sentences during training time and show that it improves the robustness of the model. Note that for our task we have to automatically label both sentence labels (intent) and word tags (slots) for such sentences. The second method we propose is adding logit pairing loss. Unlike data augmentation, logit pairing treats the original and paraphrased sentence sets differently. As such, in addition to the cross-entropy loss over the original training data, we would have another loss term enforcing that the predictions for a sentence and its paraphrases are similar in the logit space. This would ensure that the model makes smooth decisions and prevent the model from making drastically different decisions with small perturbations. <<<Data Augmentation>>> We generate back-translated data from the training data using pre-trained FB MT system. We keep the top 5 beams after the back-translation and remove the beams that already exist in the training data after lower-casing. We observed that including the top 5 beams results in quite diverse combinations without hurting the readability of the sentences. In order to use the unlabeled data, we use an extended version of self training BIBREF23 in which the original classifier is used to annotate the unlabeled data. Unsurprisingly, self-training can result in reinforcing the model errors. 
Since the sentence intent usually remains the same after paraphrasing, for each paraphrase we annotate its intent as the intent of the original sentence. Since many slot texts may be altered or removed during back-translation, we use self-training to label the slots of the paraphrases. We train the model on the combined clean and noisy datasets, with the loss function being the original loss plus the loss on back-translated data weighted by 0.1, for which the accuracy loss on the clean dev set is negligible. The model seemed to be quite insensitive to this weight, though: the clean dev accuracy was hurt by less than 1 point even when weighting the augmented data equally with the original data. The accuracy over the clean test set using the augmented training data with Czech (cs) and Spanish (es) as the auxiliary languages is shown in Table TABREF8. We observe that, as expected, data augmentation improves accuracy on sentences generated using back-translation; however, we see that it also improves accuracy on sentences generated using the seq2seq autoencoder. We discuss the results in more detail in the next section. <<</Data Augmentation>>> <<<Model smoothing via Logit Pairing>>> BIBREF6 perturb images with the attacks introduced by BIBREF3 and report state-of-the-art results by matching the logit distribution of the perturbed and original images instead of matching only the classifier decision. They also introduce clean pairing, in which the logit pairing is applied to random data points in the clean training data, which yields surprisingly good results. Here, we modify both methods for the language understanding task, including sequence word tagging, and expand the approach to targeted pairing for increasing robustness against adversarial examples. <<<Clean Logit Pairing>>> Pairing random queries as proposed by BIBREF6 performed very poorly on our task. In this paper, we study the effect of pairing sentences that have the same annotations, i.e., the same intent and the same slot labels. Consider a batch $M$, with $m$ clean sentences. For each tuple of intent and slot labels, we identify the corresponding sentences in the batch, $M_k$, and sample pairs of sentences. We add a second cost function to the original cost function for the batch that enforces the logit vectors of the intent and same-label slots of those pairs of sentences to have similar distributions: $\lambda _{sf} \sum _{(i,j) \in M_k} \Big ( L\big (I^{(i)}, I^{(j)}\big ) + \sum _{s} L\big (S^{(i)}_s, S^{(j)}_s\big ) \Big )$ where $I^{(i)}$ and $S^{(i)}_s$ denote the logit vectors corresponding to the intent and $s^{th}$ slot of the $i^{th}$ sentence, respectively. Moreover, $P$ is the total number of sampled pairs, and $\lambda _{sf}$ is a hyper-parameter. We sum the above loss for all the unique tuples of labels and normalize by the total number of pairs. Throughout this section, we use MSE loss for the function $L()$. We train the model with the same parameters as in Section SECREF2, with the only difference being that we use a learning rate of $0.001$ and train for 25 epochs to improve model convergence. Contrary to what we expected, clean logit pairing on its own reduces accuracy on both the clean and adversarial test sets. Our hypothesis is that the logit smoothing resulting from this method prevents the model from using weakly correlated features BIBREF5, which could have helped the accuracy of both the clean and adversarial test sets. <<</Clean Logit Pairing>>> <<<Adversarial Logit Pairing (ALP)>>> In order to make the model more robust to paraphrases, we pair a sentence with its back-translated paraphrases and impose the logit distributions to be similar.
We generate the paraphrases using the FB MT system as in the previous section, using es and cs as auxiliary languages. For the sentences $m^{(i)}$ inside the mini-batch and their paraphrases $\tilde{m}^{(i)}_k$, we add the following loss: $\frac{\lambda _a}{P} \sum _{i,k} \Big ( L\big (I^{(i)}, \tilde{I}^{(i)}_k\big ) + \sum _{s} L\big (S^{(i)}_s, \tilde{S}^{(i)}_{k,s}\big ) \Big )$ where $\tilde{I}^{(i)}_k$ and $\tilde{S}^{(i)}_{k,s}$ denote the corresponding logit vectors for the paraphrase $\tilde{m}^{(i)}_k$, and $P$ is the total number of original-paraphrase sentence pairs. Note that the first term, which pairs the logit vectors of the predicted intents of a sentence and its paraphrase, can be obtained in an unsupervised fashion. For the second term, however, we need to know the positions of the slots in the paraphrases so that they can be matched with the original slots. We use self-training again to tag the slots in the paraphrased sentence. Then, we pair the logit vectors corresponding to the common labels found among the original and paraphrase slots, left to right. We also find that adding a similar loss for pairs of paraphrases of the original sentence, i.e. matching the logit vectors corresponding to the intent and slots, can help the accuracy over the adversarial test sets. In Table TABREF8, we show the results using ALP (using both the original-paraphrase and paraphrase-paraphrase pairs) for $\lambda _a=0.01$. <<</Adversarial Logit Pairing (ALP)>>> <<</Model smoothing via Logit Pairing>>> <<</Approaches to Improve Robustness>>> <<<Results and Discussion>>> We observe that data augmentation using back-translation improves the accuracy across all the adversarial sets, including the seq2seq test set. Unsurprisingly, the gains are the highest when augmenting the training data using the same MT system and the same auxiliary language that the adversarial test set was generated from. However, more interestingly, it is still effective for adversarial examples generated using a different auxiliary language or a different MT system (which, as discussed in the previous section, yielded different types of sentences) from the one used at training time. More importantly, even if the generation process is different altogether, as with the seq2seq dataset generated by the noisy autoencoder, some of the gains are still transferred and the accuracy over the adversarial examples increases. We also train a model using the es and cs back-translated data combined. Table TABREF8 shows that this improves the average performance over the adversarial sets. This suggests that in order to achieve robustness towards different types of paraphrasing, we would need to augment the training data using data generated with various techniques. But one can hope that some of the defense would be transferred to adversarial examples that come from unknown sources. Note that unlike the manually annotated test sets, the augmented training data contains noise both in the generation step (e.g. meaningless utterances) as well as in the automatic annotation step. But the model seems to be quite robust toward this random noise; its accuracy over the clean test set is almost unchanged while yielding nontrivial gains over the adversarial test sets. We observe that ALP results in similarly competitive performance on the adversarial test sets as data augmentation, but it has a more detrimental effect on the clean test set accuracy. We hypothesize that data augmentation helps with smoothing the decision boundaries without preventing the model from using weakly correlated features. Hence, the regression on the clean test set is very small.
This is in contrast with adversarial defense mechanisms such as ALP BIBREF5, which make the model regress much more on the clean test set. We also combine ALP with the data augmentation technique, which yields the highest accuracy on the adversarial test sets but incurs additional costs on the clean test set (more than three points compared with the base model). Adding clean logit pairing to the above resulted in the most defense transfer (i.e. accuracy on the seq2seq adversarial test set) but it is detrimental to almost all the other metrics. One possible explanation can be that the additional regularization stemming from the clean logit pairing helps with generalization (and hence, the transfer) from the back-translated augmented data to the seq2seq test set but it is not helpful otherwise. <<</Results and Discussion>>> <<<Related Work>>> Adversarial examples BIBREF4 refer to inputs intentionally devised by an adversary which cause the model to make highly-confident but erroneous predictions, e.g. Fast Gradient Sign Attack (FGSA) BIBREF4 and Projected Gradient Descent (PGD) BIBREF3. In such methods, the constrained perturbation that (approximately) maximizes the loss for an original data point is added to it. In white-box attacks, the perturbations are chosen to maximize the model loss for the original inputs BIBREF4, BIBREF3, BIBREF24. Such attacks have been shown to be transferable to other models, which makes it possible to devise black-box attacks for a machine learning model by transferring from a known model BIBREF25, BIBREF1. Defense against such examples has been an elusive task, with proposed mechanisms proving effective against only particular attacks BIBREF3, BIBREF26. Adversarial training BIBREF4 augments the training data with carefully picked perturbations during training, which makes the model robust against normed-ball perturbations. But in the general setting of having unrestricted adversarial examples, these defenses have been shown to be highly ineffective BIBREF27. BIBREF28 introduced white-box attacks for language by swapping one token for another based on the gradient of the input. BIBREF29 introduced an algorithm to generate adversarial examples for sentiment analysis and textual entailment by replacing words of the sentence with similar tokens that preserve the language model scoring and maximize the target class probability. BIBREF7 introduced one of the few defense mechanisms for NLP by extending adversarial training to this domain, perturbing the input embeddings and enforcing the label (distribution) to remain unchanged. BIBREF30 and BIBREF8 used this strategy as a regularization method for part-of-speech, relation extraction and NER tasks. Such perturbations resemble the normed-ball attacks for images, but the perturbed input does not correspond to a real adversarial example. BIBREF11 studied two methods of generating adversarial data – back-translation and syntax-controlled sequence-to-sequence generation. They show that although the latter method is more effective in generating syntactically diverse examples, the former is also a fast and effective way of generating adversarial examples. There has been a large body of literature on language understanding for task oriented dialog using the intent/slot framework. Bidirectional LSTM for the sentence representation alongside separate projection layers for intent and slot tagging is the typical architecture for the joint task BIBREF21, BIBREF22, BIBREF12, BIBREF14.
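Since this biLSTM-plus-projection-heads setup recurs throughout the paper, the following is a minimal PyTorch sketch of such a joint model; it is an illustrative approximation and not the authors' exact implementation (the embedding size, the max-pooling used for the sentence representation, and the single linear head per task are our assumptions, while the two-layer biLSTM of size 200 and dropout of 0.3 follow the base model described earlier).

import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Illustrative joint model: biLSTM encoder with separate intent and slot projection heads."""
    def __init__(self, vocab_size, num_intents, num_slots,
                 embed_dim=128, hidden_dim=200, num_layers=2, dropout=0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                               batch_first=True, bidirectional=True, dropout=dropout)
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)  # sentence-level classification
        self.slot_head = nn.Linear(2 * hidden_dim, num_slots)      # per-token tagging

    def forward(self, token_ids):
        x = self.embedding(token_ids)                     # (batch, seq_len, embed_dim)
        states, _ = self.encoder(x)                       # (batch, seq_len, 2 * hidden_dim)
        sentence_repr = states.max(dim=1).values          # simple pooling for the intent logits
        intent_logits = self.intent_head(sentence_repr)   # (batch, num_intents)
        slot_logits = self.slot_head(states)              # (batch, seq_len, num_slots)
        return intent_logits, slot_logits

In training, each head would receive a cross-entropy loss (sentence-level for intents, token-level for slots), and the pairing terms discussed earlier would be added on top of these logits; optimiser settings and loss weights are not prescribed by this sketch.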
In parallel to the current work, BIBREF31 introduced unsupervised data augmentation for classification tasks by perturbing the training data and, similar to BIBREF7, minimizing the KL divergence between the predicted distributions on an unlabeled example and its perturbations. Their goal is to achieve high accuracy using as little labeled data as possible by leveraging the unlabeled data. In this paper, we have focused on increasing the model performance on adversarial test sets in supervised settings while constraining the degradation on the clean test set. Moreover, we focused on a more complicated task: the joint classification and sequence tagging task. <<</Related Work>>> <<<Conclusion>>> In this paper, we study the robustness of language understanding models for the joint task of sentence classification and sequence word tagging in the field of task oriented dialog by generating adversarial test sets. We further discuss defense mechanisms using data augmentation and adversarial logit pairing loss. We first generate adversarial test sets using two methods, back-translation with two languages and a sequence auto-encoder, and observe that the two methods generate different types of sentences. Our experiments show that creating the test set using a combination of the two methods above is better than either method alone, based on the model's performance on the test sets. Second, we propose how to improve the model's robustness against such adversarial test sets by both augmenting the training data and using a new loss function based on logit pairing with back-translated paraphrases annotated using self-training. The experiments show that combining data augmentation using back-translation and adversarial logit pairing loss performs best on the adversarial test sets. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nTask and Data\nRobustness Evaluation\nAutomatically Generating Examples\nBack-translation\nNoisy Sequence Autoencoder\nAnnotation\nAnalysis\nBase Model\nApproaches to Improve Robustness\nData Augmentation\nModel smoothing via Logit Pairing\nClean Logit Pairing\nAdversarial Logit Pairing (ALP)\nResults and Discussion\nRelated Work\nConclusion" ], "type": "outline" }
2004.01670
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Directions in Abusive Language Training Data: Garbage In, Garbage Out <<<Abstract>>> Data-driven analysis and detection of abusive online content covers many different tasks, phenomena, contexts, and methodologies. This paper systematically reviews abusive language dataset creation and content in conjunction with an open website for cataloguing abusive language data. This collection of knowledge leads to a synthesis providing evidence-based recommendations for practitioners working with this complex and highly diverse data. <<</Abstract>>> <<<Introduction>>> Abusive online content, such as hate speech and harassment, has received substantial attention over the past few years for its malign social effects. Left unchallenged, abusive content risks harming those who are targeted, toxifying public discourse and exacerbating social tensions, and could lead to the exclusion of some groups from public spaces. As such, systems which can accurately detect and classify online abuse at scale, in real-time and without bias are of central interest to tech companies, policymakers and academics. Most detection systems rely on having the right training dataset, reflecting one of the most widely accepted mantras in computer science: Garbage In, Garbage Out. Put simply: to have systems which can detect and classify abusive online content effectively, one needs appropriate datasets with which to train them. However, creating training datasets is often a laborious and non-trivial task – and creating datasets which are non-biased, large and theoretically-informed is even more difficult (BIBREF0 p. 189). We address this issue by examining and reviewing publicly available datasets for abusive content detection, which we provide access to on a new dedicated website, hatespeechdata.com. In the first section, we examine previous reviews and present the four research aims which guide this paper. In the second section, we conduct a critical and in-depth analysis of the available datasets, discussing first what their aim is, how tasks have been described and what taxonomies have been constructed and then, second, what they contain and how they were annotated. In the third section, we discuss the challenges of open science in this research area and elaborate different ways of sharing training datasets, including the website hatespeechdata.com. In the final section, we draw on our findings to establish best practices when creating datasets for abusive content detection. <<</Introduction>>> <<<Background>>> The volume of research examining the social and computational aspects of abusive content detection has expanded prodigiously in the past five years. This has been driven by growing awareness of the importance of the Internet more broadly BIBREF1, greater recognition of the harms caused by online abuse BIBREF2, and policy and regulatory developments, such as the EU's Code of Conduct on Hate, the UK Government's `Online Harms' white paper BIBREF3, Germany's NetzDG laws, the Public Pledge on Self-Discipline for the Chinese Internet Industry, and France's anti-hate regulation BIBREF2.
In 2020 alone, three computer science venues will host workshops on online hate (TRAC and STOC at LREC, and WOAH at EMNLP), and a shared task at 2019's SemEval on online abuse detection reports that 800 teams downloaded the training data and 115 submitted detection systems BIBREF4. At the same time, social scientific interventions have also appeared, deepening our understanding of how online abuse spreads BIBREF5 and how its harmful impact can be mitigated and challenged BIBREF6. All analyses of online abuse ultimately rely on a way of measuring it, which increasingly means having a method which can handle the sheer volume of content produced, shared and engaged with online. Traditional qualitative methods cannot scale to handle the hundreds of millions of posts which appear on each major social media platform every day, and can also introduce inconsistencies and biases BIBREF7. Computational tools have emerged as the most promising way of classifying and detecting online abuse, drawing on work in machine learning, Natural Language Processing (NLP) and statistical modelling. Increasingly sophisticated architectures, features and processes have been used to detect and classify online abuse, leveraging technically sophisticated methods, such as contextual word embeddings, graph embeddings and dependency parsing. Despite their many differences BIBREF8, nearly all methods of online abuse detection rely on a training dataset, which is used to teach the system what is and is not abuse. However, there is a lacuna of research on this crucial aspect of the machine learning process. Indeed, although several general reviews of the field have been conducted, no previous research has reviewed training datasets for abusive content detection in sufficient breadth or depth. This is surprising given (i) their fundamental importance in the detection of online abuse and (ii) growing awareness that several existing datasets suffer from many flaws BIBREF9, BIBREF10. Close relevant work includes: Schmidt and Wiegand conduct a comprehensive review of research into the detection and classification of abusive online content. They discuss training datasets, stating that `to perform experiments on hate speech detection, access to labelled corpora is essential' (BIBREF8, p. 7), and briefly discuss the sources and size of the most prominent existing training datasets, as well as how datasets are sampled and annotated. Schmidt and Wiegand identify two key challenges with existing datasets. First, `data sparsity': many training datasets are small and lack linguistic variety. Second, metadata (such as how data was sampled) is crucial as it lets future researchers understand unintended biases, but is often not adequately reported (BIBREF8, p. 6). Waseem et al. BIBREF11 outline a typology of detection tasks, based on a two-by-two matrix of (i) identity- versus person-directed abuse and (ii) explicit versus implicit abuse. They emphasise the importance of high-quality datasets, particularly for more nuanced expressions of abuse: `Without high quality labelled data to learn these representations, it may be difficult for researchers to come up with models of syntactic structure that can help to identify implicit abuse.' (BIBREF11, p.
81) Jurgens et al. BIBREF12 also conduct a critical review of hate speech detection, and note that `labelled ground truth data for building and evaluating classifiers is hard to obtain because platforms typically do not share moderated content due to privacy, ethical and public relations concerns.' (BIBREF12, p. 3661) They argue that the field needs to `address the data scarcity faced by abuse detection research' in order to better address more complex research issues and pressing social challenges, such as `develop[ing] proactive technologies that counter or inhibit abuse before it harms' (BIBREF12, pp. 3658, 3661). Vidgen et al. describe several limitations with existing training datasets for abusive content, most notably how `they contain systematic biases towards certain types and targets of abuse.' BIBREF13[p.2]. They describe three issues in the quality of datasets: degradation (whereby datasets decline in quality over time), annotation (whereby annotators often have low agreement, indicating considerable uncertainty in class assignments) and variety (whereby `The quality, size and class balance of datasets varies considerably.' [p. 6]). Chetty and Alathur BIBREF14 review the use of Internet-based technologies and online social networks to study the spread of hateful, offensive and extremist content BIBREF14. Their review covers both computational and legal/social scientific aspects of hate speech detection, and outlines the importance of distinguishing between different types of group-directed prejudice. However, they do not consider training datasets in any depth. Fortuna and Nunes BIBREF15 provide an end-to-end review of hate speech research, including the motivations for studying online hate, definitional challenges, dataset creation/sharing, and technical advances, both in terms of feature selection and algorithmic architecture (BIBREF15, 2018). They delineate between different types of online abuse, including hate, cyberbullying, discrimination and flaming, and add much needed clarity to the field. They show that (1) dataset size varies considerably but they are generally small (mostly containing fewer than 10,000 entries), (2) Twitter is the most widely-studied platform, and (3) most papers research hate speech per se (i.e. without specifying a target). Of those which do specify a target, racism and sexism are the most researched. However, their review focuses on publications rather than datasets: the same dataset might be used in multiple studies, limiting the relevance of their review for understanding the intrinsic role of training datasets. They also only engage with datasets fairly briefly, as part of a much broader review. Several classification papers also discuss the most widely used datasets, including Davidson et al. BIBREF16 who describe five datasets, and Salminen et al. who review 17 datasets and describe four in detail BIBREF17. This paper addresses this lacuna in existing research, providing a systematic review of available training datasets for online abuse. To provide structure to this review, we adopt the `data statements' framework put forward by Bender and Friedman BIBREF18, as well as other work providing frameworks, schema and processes for analysing NLP artefacts BIBREF19, BIBREF20, BIBREF21. Data statements are a way of documenting the decisions which underpin the creation of datasets used for Natural Language Processing (NLP).
They formalise how decisions should be documented, not only ensuring scientific integrity but also addressing `the open and urgent question of how we integrate ethical considerations in the everyday practice of our field' (BIBREF18, p. 587). In many cases, we find that it is not possible to fully recreate the level of detail recorded in an original data statement from how datasets are described in publications. This reinforces the importance of proper documentation at the point of dataset creation. As the field of online abusive content detection matures, it has started to tackle more complex research challenges, such as multi-platform, multi-lingual and multi-target abuse detection, and systems are increasingly being deployed in `the wild' for social scientific analyses and for content moderation BIBREF5. Such research heightens the focus on training datasets as exactly what is being detected comes under greater scrutiny. To enhance our understanding of this domain, our review paper has four research aims. Research Aim One: to provide an in-depth and critical analysis of the available training datasets for abusive online content detection. Research Aim Two: to map and discuss ways of addressing the lack of dataset sharing, and as such the lack of `open science', in the field of online abuse research. Research Aim Three: to introduce the website hatespeechdata.com, as a way of enabling more dataset sharing. Research Aim Four: to identify best practices for creating an abusive content training dataset. <<</Background>>> <<<Analysis of training datasets>>> Relevant publications reporting training datasets for abusive content detection have been identified from four sources: The Scopus database of academic publications, identified using keyword searches. The ACL Anthology database of NLP research papers, identified using keyword searches. The ArXiv database of preprints, identified using keyword searches. Proceedings of the 1st, 2nd and 3rd workshops on abusive language online (ACL). Most publications report on the creation of one abusive content training dataset. However, some describe several new datasets simultaneously or provide one dataset with several distinct subsets of data BIBREF22, BIBREF23, BIBREF24, BIBREF25. For consistency, we separate out each subset of data where they are in different languages or the data is collected from different platforms. As such, the number of datasets is greater than the number of publications. All of the datasets were released between 2016 and 2019, as shown in Figure FIGREF17. <<<The purpose of training datasets>>> <<<Problems addressed by datasets>>> Creating a training dataset for online abuse detection is typically motivated by the desire to address a particular social problem. These motivations can inform how a taxonomy of abusive language is designed, how data is collected and what instructions are given to annotators. We identify the following motivating reasons, which were explicitly referenced by dataset creators. Reducing harm: Aggressive, derogatory and demeaning online interactions can inflict harm on individuals who are targeted by such content and those who are not targeted but still observe it. This has been shown to have profound long-term consequences on individuals' well-being, with some vulnerable individuals expressing concerns about leaving their homes following experiences of abuse BIBREF26.
Accordingly, many dataset creators state that aggressive language and online harassment are a social problem which they want to help address. Removing illegal content: Many countries legislate against certain forms of speech, e.g. direct threats of violence. For instance, the EU's Code of Conduct requires that all content that is flagged for being illegal online hate speech is reviewed within 24 hours, and removed if necessary BIBREF27. Many large social media platforms and tech companies adhere to this code of conduct (including Facebook, Google and Twitter) and, as of September 2019, 89% of such content is reviewed in 24 hours BIBREF28. However, we note that in most cases the abuse that is marked up in training datasets falls short of the requirements of illegal online hate – indeed, as most datasets are taken from public API access points, the data has usually already been moderated by the platforms and most illegal content removed. Improving health of online conversations: The health of online communities can be severely affected by abusive language. It can fracture communities, exacerbate tensions and even repel users. This is not only bad for the community and for civic discourse in general; it also negatively impacts engagement and thus the revenue of the host platforms. Therefore, there is a growing impetus to improve user experience and ensure online dialogues are healthy, inclusive and respectful where possible. There is ample scope for improvement: a study showed that 82% of personal attacks on Wikipedia against other editors are not addressed BIBREF29. Taking steps to improve the health of exchanges in online communities will also benefit commercial and voluntary content moderators. They are routinely exposed to such content, often with insufficient safeguards, and sometimes display symptoms similar to those of PTSD BIBREF30. Automatic tools could help to lessen this exposure, reducing the burden on moderators. <<</Problems addressed by datasets>>> <<<Uses of datasets: How detection tasks are defined>>> Myriad tasks have been addressed in the field of abusive online content detection, reflecting the different disciplines, motivations and assumptions behind research. This has led to considerable variation in what is actually detected under the rubric of `abusive content', and establishing a degree of order over the diverse categorisations and subcategorisations is both difficult and somewhat arbitrary. Key dimensions which dataset creators have used to categorise detection tasks include who/what is targeted (e.g. groups vs. individuals), the strength of content (e.g. covert vs. overt), the nature of the abuse (e.g. benevolent vs. hostile sexism BIBREF31), how the abuse manifests (e.g. threats vs. derogatory statements), the tone (e.g. aggressive vs. non-aggressive), the specific target (e.g. ethnic minorities vs. women), and the subjective perception of the reader (e.g. disrespectful vs. respectful). Other important dimensions include the theme used to express abuse (e.g. Islamophobia which relies on tropes about terrorism vs. tropes about sexism) and the use of particular linguistic devices, such as appeals to authority, sincerity and irony. All of these dimensions can be combined in different ways, producing a large number of intersecting tasks. Consistency in how tasks are described will not necessarily ensure that datasets can be used interchangeably.
From the description of a task, an annotation framework must be developed which converts the conceptualisation of abuse into a set of standards. This formalised representation of the `abuse' inevitably involves shortcuts, imperfect rules and simplifications. If annotation frameworks are developed and applied differently, then even datasets aimed at the same task can still vary considerably. Nonetheless, how detection tasks for online abuse are described is crucial for how the datasets – and in turn the systems trained on them – can subsequently be used. For example, a dataset annotated for hate speech can be used to examine bigoted biases, but the reverse is not true. How datasets are framed also impacts whether, and how, datasets can be combined to form large `mega-datasets' – a potentially promising avenue for overcoming data sparsity BIBREF17. In the remainder of this section, we provide a framework for splitting out detection tasks along the two most salient dimensions: (1) the nature of abuse and (2) the granularity of the taxonomy. <<<Detection tasks: the nature of abuse>>> This refers to what is targeted/attacked by the content and, subsequently, how the taxonomy has been designed/framed by the dataset creators. The most well-established taxonomic distinction in this regard is the difference between (i) the detection of interpersonal abuse, and (ii) the detection of group-directed abuse BIBREF11). Other authors have sought to deductively theorise additional categories, such as `concept-directed' abuse, although these have not been widely adopted BIBREF13. Through an inductive investigation of existing training datasets, we extend this binary distinction to four primary categories of abuse which have been studied in previous work, as well as a fifth `Mixed' category. Person-directed abuse. Content which directs negativity against individuals, typically through aggression, insults, intimidation, hostility and trolling, amongst other tactics. Most research falls under the auspices of `cyber bullying', `harassment' and `trolling' BIBREF23, BIBREF32, BIBREF33. One major dataset of English Wikipedia editor comments BIBREF29 focuses on the `personal attack' element of harassment, drawing on prior investigations that mapped out harassment in that community. Another widely used dataset focuses on trolls' intent to intimidate, distinguishing between direct harassment and other behaviours BIBREF34. An important consideration in studies of person-directed abuse is (a) interpersonal relations, such as whether individuals engage in patterns of abuse or one-off acts and whether they are known to each other in the `real' world (both of which are a key concern in studies of cyberbullying) and (b) standpoint, such as whether individuals directly engage in abuse themselves or encourage others to do so. For example, the theoretically sophisticated synthetic dataset provided by BIBREF33 identifies not only harassment but also encouragement to harassment. BIBREF22 mark up posts from computer game forums (World of Warcraft and League of Legends) for cyberbullying and annotate these as $\langle $offender, victim, message$\rangle $ tuples. Group-directed abuse. Content which directs negativity against a social identity, which is defined in relation to a particular attribute (e.g. ethnic, racial, religious groups)BIBREF35. Such abuse is often directed against marginalised or under-represented groups in society. 
Group-directed abuse is typically described as `hate speech' and includes use of dehumanising language, making derogatory, demonising or hostile statements, making threats, and inciting others to engage in violence, amongst other dangerous communications. Common examples of group-directed abuse include sexism, which is included in datasets provided by BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF33 and racism, which is directly targeted in BIBREF36, BIBREF40. In some cases, specific types of group-directed abuse are subsumed within a broader category of identity-directed abuse, as in BIBREF41, BIBREF42, BIBREF4. Determining the limits of any group-directed abuse category requires careful theoretical reflection, as with the decision to include ethnic, caste-based and certain religious prejudices under `racism'. There is no `right' answer to such questions as they engage with ontological concerns about identification and `being' and the politics of categorization. Flagged content. Content which is reported by community members or assessed by community and professional content moderators. This covers a broad range of focuses as moderators may also remove spam, sexually inappropriate content and other undesirable contributions. In this regard, `flagged' content is akin to the concept of `trolling', which covers a wide range of behaviours, from jokes and playful interventions through to sinister personal attacks such as doxxing BIBREF43. Some forms of trolling can be measured with tools such as the Global Assessment of Internet Trolling (GAIT) BIBREF43. Incivility. Content which is considered to be incivil, rude, inappropriate, offensive or disrespectful BIBREF24, BIBREF25, BIBREF44. Such categories are usually defined with reference to the tone that the author adopts rather than the substantive content of what they express, which is the basis of person- and group- directed categories. Such content usually contains obscene, profane or otherwise `dirty' words. This can be easier to detect as closed-class lists are effective at identifying single objectionable words (e.g. BIBREF45). However, one concern with this type of research is that the presence of `dirty' words does not necessarily signal malicious intent or abuse; they may equally be used as intensifiers or colloquialisms BIBREF46. At the same time, detecting incivility can be more difficult as it requires annotators to infer the subjective intent of the speaker or to understand (or guess) the social norms of a setting and thus whether disrespect has been expressed BIBREF42. Content can be incivil without directing hate against a group or person, and can be inappropriate in one setting but not another: as such it tends to be more subjective and contextual than other types of abusive language. Mixed. Content which contains multiple types of abuse, usually a combination of the four categories discussed above. The intersecting nature of online language means that this is common but can also manifest in unexpected ways. For instance, female politicians may receive more interpersonal abuse than other politicians. This might not appear as misogyny because their identity as women is not referenced – but it might have motivated the abuse they were subjected to. Mixed forms of abuse require further research, and have thus far been most fully explored in the OLID dataset provided by BIBREF4, who explore several facets of abuse under one taxonomy. 
<<</Detection tasks: the nature of abuse>>> <<<Detection tasks: Granularity of taxonomies>>> This refers to how much detail a taxonomy contains, reflected in the number of unique classes. The most important and widespread distinction is whether a binary class is used (e.g. Hate / Not) or a multi-level class, such as a tripartite split (typically, Overt, Covert and Non-abusive). In some cases, a large number of complex classes are created, such as by combining whether the abuse is targeted or not along with its theme and strength. In general, Social scientific analyses encourage creating a detailed taxonomy with a large number of fine-grained categories. However, this is only useful for machine learning if there are enough data points in each category and if annotators are capable of consistently distinguishing between them. Complex annotation schemas may not result in better training datasets if they are not implemented in a robust way. As such, it is unsurprising that binary classification schemas are the most prevalent, even though they are arguably the least useful given the variety of ways in which abuse can be articulated. This can range from the explicit and overt (e.g. directing threats against a group) to more subtle behaviours, such as micro-aggressions and dismissing marginalised groups' experiences of prejudice. Subsuming both types of behaviour within one category not only risks making detection difficult (due to considerable in-class variation) but also leads to a detection system which cannot make important distinctions between qualitatively different types of content. This has severe implications for whether detection systems trained on such datasets can actually be used for downstream tasks, such as content moderation and social scientific analysis. Drawing together the nature and granularity of abuse, our analyses identify a hierarchy of taxonomic granularity from least to most granular: Binary classification of a single `meta' category, such as hate/not or abuse/not. This can lead to very general and vague research, which is difficult to apply in practice. Binary classification of a single type of abuse, such as person-directed or group-directed. This can be problematic given that abuse is nearly always directed against a group rather than `groups' per se. Binary classification of abuse against a single well-defined group, such as racism/not or Islamophobia/not, or interpersonal abuse against a well-defined cohort, such as MPs and young people. Multi-class (or multi-label) classification of different types of abuse, such as: Multiple targets (e.g. racist, sexist and non-hateful content) or Multiple strengths (e.g. none, implicit and explicit content). Multiple types (e.g. threats versus derogatory statements or benevolent versus hostile statements). Multi-class classification of different types of abuse which is integrated with other dimensions of abuse. <<</Detection tasks: Granularity of taxonomies>>> <<</Uses of datasets: How detection tasks are defined>>> <<</The purpose of training datasets>>> <<<The content of training datasets>>> <<<The `Level' of content>>> 49 of the training datasets are annotated at the level of the post, one dataset is annotated at the level of the user BIBREF47, and none of them are annotated at the level of the comment thread. Only two publications indicate that the entire conversational thread was presented to annotators when marking up individual entries, meaning that in most cases this important contextual information is not used. 
49 of the training datasets contain only text. This is a considerable limitation of existing research BIBREF13, especially given the multimodal nature of online communication and the increasing ubiquity of digital-specific image-based forms of communication such as Memes, Gifs, Filters and Snaps BIBREF48. Although some work has addressed the task of detecting hateful images BIBREF49, BIBREF50, this led to the creation of a publicly available labelled training dataset in only one case BIBREF51. To our knowledge, no research has tackled the problem of detecting hateful audio content. This is a distinct challenge; alongside the semantic content, audio also contains important vocal cues which provide more opportunities to investigate (but also potentially misinterpret) tone and intention. <<</The `Level' of content>>> <<<Language>>> The most common language in the training datasets is English, which appears in 20 datasets, followed by Arabic and Italian (5 datasets each), Hindi-English (4 datasets) and then German, Indonesian and Spanish (3 datasets). Noticeably, several major languages, both globally and in Europe, do not appear, which suggests considerable unevenness in the linguistic and cultural focuses of abusive language detection. For instance, there are major gaps in the coverage of European languages, including Danish and Dutch. Surprisingly, French only appears once. The dominance of English may be due to how we sampled publications (for which we used English terms), but may also reflect different publishing practices in different countries and how well-developed abusive content research is. <<</Language>>> <<<Source of data>>> Training datasets use data collected from a range of online spaces, ranging from mainstream platforms, such as Twitter, Wikipedia and Facebook, to more niche forums, such as World of Warcraft and Stormfront. In most cases, data is collected from public sources and then manually annotated, but in others data is sourced through proprietary data sharing agreements with host platforms. Unsurprisingly, Twitter is the most widely used source of data, accounting for 27 of the datasets. This reflects wider concerns in computational social research that Twitter is over-used, primarily because it has a very accessible API for data collection BIBREF52, BIBREF53. Facebook and Wikipedia are the second most used sources of data, accounting for three datasets each – although we note that all three Wikipedia datasets are reported in the same publication. Many of the most widely used online platforms are not represented at all, or only in one dataset, such as Reddit, Weibo, VK and YouTube. The lack of diversity in where data is collected from limits the development of detection systems. Three main issues emerge: Linguistic practices vary across platforms. Twitter only allows 280 characters (previously only 140), provoking stylistic changes BIBREF54, and abusive content detection systems trained on this data are unlikely to work as well with longer pieces of text. Dealing with longer pieces of text could necessitate different classification systems, potentially affecting the choice of algorithmic architecture. Additionally, the technical affordances of platforms may affect the style, tone and topic of the content they host. The demographics of users on different platforms vary considerably.
Social science research indicates that `digital divides' exist, whereby online users are not representative of wider populations and differ across different online spaces BIBREF53, BIBREF55, BIBREF56. Blank draws attention to how Twitter users are usually younger and wealthier than offline populations; over-reliance on data from Twitter means, in effect, that we are over-sampling data from this privileged section of society. Blank also shows that there are important cross-national differences: British Twitter users are better-educated than the offline British population but the same is not true for American Twitter users compared with the offline American population BIBREF56. These demographic differences are likely to affect the types of content that users produce. Platforms have different norms and so host different types and amounts of abuse. Mainstream platforms have made efforts in recent times to `clean up' content and so the most overt and aggressive forms of abuse, such as direct threats, are likely to be taken down BIBREF57. However, more niche platforms, such as Gab or 4chan, tolerate more offensive forms of speech and are more likely to contain explicit abuse, such as racism and very intrusive forms of harassment, such as `doxxing' BIBREF58, BIBREF59, BIBREF60. Over-reliance on a few sources of data could mean that datasets are biased towards only a subset of types of abuse. <<</Source of data>>> <<<Size>>> The size of the training datasets varies considerably from 469 posts to 17 million; a difference of four orders of magnitude. Differences in size partly reflect different annotation approaches. The largest datasets are from proprietary data sharing agreements with platforms. Smaller datasets tend to be carefully collected and then manually annotated. There are no established guidelines for how large an abusive language training dataset needs to be. However, smaller datasets are problematic because they contain too little linguistic variation and increase the likelihood of overfitting. Rizoiu et al. BIBREF61 train detection models on only a proportion of the Davidson et al. and Waseem training datasets and show that this leads to worse performance, with a lower F1-Score, particularly for `data hungry' deep learning approaches BIBREF61. At the same time, `big' datasets alone are not a panacea for the challenges of abusive content classification. Large training datasets which have been poorly sampled, annotated with theoretically problematic categories or inexpertly and unthoughtfully annotated, could still lead to the development of poor classification systems. The challenges posed by small datasets could potentially be overcome through machine learning techniques such as `semi-supervised' and `active' learning BIBREF62, although these have so far only been applied to abusive content detection to a limited extent BIBREF63. Sharifirad et al. propose using text augmentation and new text generation as a way of overcoming small datasets, which is a promising avenue for future research BIBREF64. <<</Size>>> <<<Class distribution and sampling>>> Class distribution is an important, although often under-considered, aspect of the design of training datasets. Datasets with little abusive content will lack linguistic variation in terms of what is abusive, thereby increasing the risk of overfitting. More concerningly, the class distribution directly affects the nature of the engineering task and how performance should be evaluated.
On average, 35% of the content in the training datasets is abusive. However, class distributions vary considerably, from those with just 1% abusive content up to 100%. These differences are largely a product of how data is sampled and which platform it is taken from. Bretschneider BIBREF22 created two datasets without using purposive sampling, and as such they contain very low levels of abuse (around 1%). Other studies filter data collection based on platforms, time periods, keywords/hashtags and individuals to increase the prevalence of abuse. Four datasets comprise only abusive content; three cases are synthetic datasets, reported on in one publication BIBREF65, and in the other case the dataset is an amendment to an existing dataset and only contains misogynistic content BIBREF37. Purposive sampling has been criticised for introducing various forms of bias into datasets BIBREF66, such as missing out on mis-spelled content BIBREF67 and only focusing on the linguistic patterns of an atypical subset of users. One pressing risk is that a lot of data is sampled from far-right communities – which means that most hate speech classifiers implicitly pick up on right-wing styles of discourse rather than hate speech per se. This could have profound consequences for our understanding of online political dialogue if the classifiers are applied uncritically to other groups. Nevertheless, purposive sampling is arguably a necessary step when creating a training dataset given the low prevalence of abuse on social media in general BIBREF68. <<</Class distribution and sampling>>> <<<Identity of the content creators>>> The identity of the users who originally created the content in training datasets is described in only two cases. In both cases the data is synthetic BIBREF65, BIBREF33. Chung et al. use `nichesourcing' to synthetically generate abuse, with experts in tackling hate speech creating hateful posts. Sprugnoli et al. ask children to adopt pre-defined roles in an experimental classroom setup, and ask them to engage in a cyberbullying scenario. In most of the non-synthetic training datasets, some information is given about the sampling criteria used to collect data, such as hashtags. However, this does not provide direct insight into who the content creators are, such as their identity, demographics, online behavioural patterns and affiliations. Providing more information about content creators may help address biases in existing datasets. For instance, Wiegand et al. show that 70% of the sexist tweets in the highly cited Waseem and Hovy dataset BIBREF36 come from two content creators and that 99% of the racist tweets come from just one BIBREF66. This is a serious constraint as it means that user-level metadata is artificially highly predictive of abuse. And, even when user-level metadata is not explicitly modelled, detection systems only need to pick up on the linguistic patterns of a few authors to nominally detect abuse.
Overall, the complete lack of information about which users have created the content in most training datasets is a substantial limitation which may be driving as-yet-unrecognised biases. This can be remedied through the methodological rigour implicit in including a data statement with a corpus. <<</Identity of the content creators>>> <<</The content of training datasets>>> <<<Annotation of training datasets>>> <<<Annotation process>>> How training datasets are annotated is one of the most important aspects of their creation. A range of annotation processes are used in training datasets, which we split into five high-level categories: Crowdsourcing (15 datasets). Crowdsourcing is widely used in NLP research because it is relatively cheap and easy to implement. The value of crowdsourcing lies in having annotations undertaken by `a large number of non-experts' (BIBREF69, p. 278) – any bit of content can be annotated by multiple annotators, effectively trading quality for quantity. Studies which use crowdsourcing with only a few annotators for each bit of content risk minimising quality without counterbalancing it with greater quantity. Furthermore, testing the work of many different annotators can be challenging BIBREF70, BIBREF71 and ensuring they are paid an ethical amount may make the cost comparable to using trained experts. Crowdsourcing has also been associated with `citizen science' initiatives to make academic research more accessible but this may not be fully realised in cases where annotation tasks are laborious and low-skilled BIBREF72, BIBREF20. Academic experts (22 datasets). Expert annotation is time-intensive but is considered to produce higher quality annotations. Waseem reports that `systems trained on expert annotations outperform systems trained on amateur annotations.' BIBREF73 and, similarly, D'Orazio et al. claim, `although expert coding is costly, it produces quality data.' BIBREF74. However, the notion of an `expert' remains somewhat fuzzy within abusive content detection research. In many cases, publications only report that `an expert' is used, without specifying the nature of their expertise – even though this can vary substantially. For example, an expert may refer to an NLP practitioner, an undergraduate student with only modest levels of training, a member of an attacked social group relevant to the dataset or a researcher with a doctorate in the study of prejudice. In general, we anticipate that experts in the social scientific study of prejudice/abuse would perform better at annotation tasks than NLP experts who may not have any direct expertise in the conceptual and theoretical issues of abusive content annotation. In particular, one risk of using NLP practitioners, whether students or professionals, is that they might `game' training datasets based on what they anticipate is technically feasible for existing detection systems. For instance, if existing systems perform poorly when presented with long-range dependencies, humour or subtle forms of hate (which are nonetheless usually discernible to human readers) then NLP experts could unintentionally use this expectation to inform their annotations and not label such content as hateful. Professional moderators (3 datasets). Professional moderators offer a standardized approach to content annotation, implemented by experienced workers. This should, in principle, result in high quality annotations.
However, one concern is that moderators are output-focused as their work involves determining whether content should be allowed or removed from platforms; they may not provide detailed labels about the nature of abuse and may also set the bar for content labelled `abusive' fairly high, missing out on more nuanced and subtle varieties. In most cases, moderators will annotate for a range of unacceptable content, such as spam and sexual content, and this must be marked in datasets. A mix of crowdsourcing and experts (6 datasets). Synthetic data creation (4 datasets). Synthetic datasets are an interesting option, although they are inherently non-authentic and therefore not necessarily representative of how abuse manifests in real-world situations. However, if they are created in realistic conditions by experts or relevant content creators then they can mimic real behaviour and have the added advantage that they may have broader coverage of different types of abuse. They are also usually easier to share. <<</Annotation process>>> <<<Identity of the annotators>>> The data statements framework given by Bender and Friedman emphasises the importance of understanding who has completed annotations. Knowing who the annotators are is important because `their own "social address" influences their experience with language and thus their perception of what they are annotating.' BIBREF18 In the context of online abuse, Binns et al. show that the gender of annotators systematically influences what annotations they provide BIBREF75. No annotator will be well-versed in all of the slang or coded meanings used to construct abusive language. Indeed, many of these coded meanings are deliberately covert and obfuscated BIBREF76. To help mitigate these challenges, annotators should be (a) well-qualified and (b) diverse. A homogeneous group of annotators will be poorly equipped to catch all instances of abuse in a corpus. Recruiting an intentionally mixed group of annotators is likely to yield better recall of abuse and thus a more precise dataset BIBREF77. Information about annotators is unfortunately scarce. In 23 of the training datasets no information is given about the identity of annotators; in 17 datasets very limited information is given, such as whether the annotator is a native speaker of the language; and in just 10 cases is detailed information given. Interestingly, only 4 out of these 10 datasets are in the English language. Relevant information about annotators can be split into (i) Demographic information and (ii) annotators' expertise and experience. In none of the training sets is the full range of annotator information made available, which includes: Demographic information. The nature of the task affects what information should be provided, as well as the geographic and cultural context. For instance, research on Islamophobia should include, at the very least, information about annotators' religious affiliation. Relevant variables include: age; ethnicity and race; religion; gender; and sexual orientation. Expertise and experience. Relevant variables include: field of research; years of experience; and research status (e.g. research assistant or post-doc). Personal experiences of abuse. In our review, none of the datasets contained systematic information about whether annotators had been personally targeted by abuse or had viewed such abuse online, even though this can impact annotators' perceptions. Relevant variables include: experiences of being targeted by online abuse; and experiences of viewing online abuse.
<<</Identity of the annotators>>> <<<Guidelines for annotation>>> A key source of variation across datasets is whether annotators were given detailed guidelines, very minimal guidelines or no guidelines at all. Analysing this issue is made difficult by the fact that many dataset creators do not share their annotation guidelines. 21 of the datasets we study do not provide the guidelines and 14 only provide them in a highly summarised form. In just 15 datasets is detailed information given (and these are reported on in just 9 publications). Requiring researchers to publish annotation guidelines not only helps future researchers to better understand what datasets contain but also to improve and extend them. This could be crucial for improving the quality of annotations; as Ross et al. recommend, `raters need more detailed instructions for annotation.' BIBREF78 The degree of detail given in guidelines is linked to how the notion of `abuse' is understood. Some dataset creators construct clear and explicit guidelines in an attempt to ensure that annotations are uniform and align closely with social scientific concepts. In other cases, dataset creators allow annotators to apply their own perception. For instance, in their Portuguese language dataset, Fortuna et al. ask annotators to `evaluate if according to your opinion, these tweets contain hate speech' BIBREF38. The risk here is that annotators' perceptions may differ considerably; Salminen et al. show that online hate interpretation varies considerably across individuals BIBREF79. This is also reflected in inter-annotator agreement scores for abusive content, which are often very low, particularly for tasks which deploy more than just a binary taxonomy. However, it is unlikely that annotators could ever truly divorce themselves from their own social experience and background to decide on a single `objective' annotation. Abusive content annotation is better understood, epistemologically, as an intersubjective process in which agreement is constructed, rather than an objective process in which a `true' annotation is `found'. For this reason, some researchers have shifted the question from `how can we achieve the correct annotation?' to `who should decide what the correct annotation is?' BIBREF73. Ultimately, whether annotators should be allowed greater freedom in making annotations, and whether this results in higher quality datasets, needs further research and conceptual examination. Some aspects of abusive language present fundamental issues that are prone to unreliable annotation, such as Irony, Calumniation and Intent. They are intrinsically difficult to annotate given a third-person perspective on a piece of text as they involve making a judgement about indeterminate issues. However, they cannot be ignored given their prevalence in abusive content and their importance to how abuse is expressed. Thus, although they are fundamentally conceptual problems, these issues also present practical problems for annotators, and should be addressed explicitly in coding guidelines. Otherwise, as BIBREF80 note, these issues are likely to drive type II errors in classification, i.e. labelling non-hate-speech utterances as hate speech. <<<Irony>>> This covers statements that have a meaning contrary to what one might glean at first reading. Lachenicht BIBREF81 notes that Irony goes against Grice's quality maxim, and as such Ironic content requires closer attention from the reader as it is prone to being misinterpreted.
Irony is a particularly difficult issue as in some cases it is primarily intended to provide humour (and thus might legitimately be considered non-abusive) but in other cases is used as a way of veiling genuine abuse. Previous research suggests that the problem is widespread. Sanguinetti et al. BIBREF82 find irony in 11% of hateful tweets in Italian. BIBREF25 find that irony is one of the most common phenomena in self-deleted comments; and that the prevalence of irony is 33.9% amongst deleted comments in a Croatian comment dataset and 18.1% amongst deleted comments in a Slovene comment dataset. Furthermore, annotating irony (as well as related constructs, such as sarcasm and humour) is inherently difficult. BIBREF83 report that agreement on sarcasm amongst annotators working in English is low, something echoed by annotations of Danish content BIBREF84. Irony is also one of the most common reasons for content to be re-moderated on appeal, according to Pavlopoulos et al. BIBREF24. <<</Irony>>> <<<Calumniation>>> This covers false statements, slander, and libel. From the surveyed set, this is annotated in datasets for Greek BIBREF24 and for Croatian and Slovene BIBREF25. Its prevalence varies considerably across these two datasets and reliable estimations of the prevalence of false statements are not available. Calumniation is not only an empirical issue; it also raises conceptual problems: should false information be considered abusive if it slanders or demeans a person? However, if the information is later found to be true, does that make the content any less abusive? Given the contentiousness of `objectivity', and the lack of consensus about most issues in a `post-truth' age BIBREF85, who should decide what is considered true? And, finally, how do we determine whether the content creator knows whether something is true? These ontological, epistemological and social questions are fundamental to the issue of truth and falsity in abusive language. Understandably, most datasets do not take any perspective on the truth and falsity of content. This is a practical solution: given error rates in abusive language detection as well as error rates in fact-checking, a system which combined both could be inapplicable in practice. <<</Calumniation>>> <<<Intent>>> This information about the utterer's state of mind is a core part of how many types of abusive language are defined. Intent is usually used to emphasize the wrongness of abusive behaviour, such as spreading, inciting, promoting or justifying hatred or violence towards a given target, or sending a message that aims at dehumanising, delegitimising, hurting or intimidating them BIBREF82. BIBREF81 postulate that "aggravation, invective and rudeness ... may be performed with varying degrees of intention to hurt", and cite five legal degrees of intent BIBREF86. However, it is difficult to discern the intent of another speaker in a verbal conversation between humans, and even more difficult to do so through written and computer-mediated communications BIBREF87. Nevertheless, intent is particularly important for some categories of abuse such as bullying, maliciousness and hostility BIBREF34, BIBREF32. Most of the guidelines for the datasets we have studied do not contain an explicit discussion of intent, although there are exceptions. BIBREF88 include intent as a core part of their annotation standard, noting that understanding context (such as by seeing a speaker's other online messages) is crucial to achieving quality annotations.
However, this proposition poses conceptual challenges given that people's intent can shift over time. Deleted comments have been used to study potential expressions of regret by users and, as such, a change in their intent BIBREF89, BIBREF25; this has also been reported as a common motivator even in self-deletion of non-abusive language BIBREF90. Equally, engaging in a sequence of targeted abusive language is an indicator of aggressive intent, and appears in several definitions. BIBREF23 specify an "intent to physically assert power over women" as a requirement for multiple categories of misogynistic behaviour. BIBREF34 find that messages that are "unapologetically or intentionally offensive" fit in the highest grade of trolling under their schema. Kenny et al. BIBREF86 note how sarcasm, irony, and humour complicate the picture of intent by introducing considerable difficulties in discerning the true intent of speakers (as discussed above). Part of the challenge is that many abusive terms, such as slurs and insults, are polysemic and may be co-opted by an ingroup into terms of entertainment and endearment BIBREF34. <<</Intent>>> <<</Guidelines for annotation>>> <<</Annotation of training datasets>>> <<</Analysis of training datasets>>> <<<Dataset sharing>>> <<<The challenges and opportunities of achieving Open Science>>> All of the training datasets we analyse are publicly accessible and as such can be used by researchers other than the authors of the original publication. Sharing data is an important aspect of open science but also poses ethical and legal risks, especially in light of recent regulatory changes, such as the introduction of GDPR in the UK BIBREF91, BIBREF92. This problem is particularly acute with abusive content, which can be deeply shocking, and some training datasets from highly cited publications have not been made publicly available BIBREF93, BIBREF94, BIBREF95. Open science initiatives can also raise concerns amongst the public, who may not be comfortable with researchers sharing their personal data BIBREF96, BIBREF97. The difficulty of sharing data in sensitive areas of research is reflected by the Islamist extremism research website, `Jihadology'. It chose to restrict public access in 2019, following efforts by Home Office counter-terrorism officials to shut it down completely. They were concerned that, whilst it aimed to support academic research into Islamist extremism, it may have inadvertently enabled individuals to radicalise by making otherwise banned extremist material available. By working with partners such as the not-for-profit Tech Against Terrorism, Jihadology created a secure area in the website, which can only be accessed by approved researchers. Some of the training datasets in our list have similar requirements, and can only be accessed following a registration process. Open sharing of datasets is not only a question of scientific integrity and a powerful way of advancing scientific knowledge. It is also, fundamentally, a question of fairness and power. Opening access to datasets will enable less well-funded researchers and organisations, including researchers in the Global South and those working for not-for-profit organisations, to steer and contribute to research. This is a particularly pressing issue in a field which is directly concerned with the experiences of often-marginalised communities and actors BIBREF36.
For instance, one growing concern is the biases encoded in detection systems and the impact this could have when they are applied in real-world settings BIBREF9, BIBREF10. This research could be further advanced by making more datasets and detection systems more easily available. For instance, Binns et al. use the detailed metadata in the datasets provided by Wulczyn et al. to investigate how the demographics of annotators impact the annotations they make BIBREF75, BIBREF29. The value of such insights is only clear after the dataset has been shared – and, equally, is only possible because of data sharing. More effective ways of sharing datasets would address the fact that datasets often deteriorate after they have been published BIBREF13. Several of the most widely used datasets provide only the annotations and IDs and must be `rehydrated' to collect the content. Both of the datasets provided by Waseem and Hovy and Founta et al. must be collected in this way BIBREF98, BIBREF36, and both have degraded considerably since they were first released as the tweets are no longer available on Twitter. Chung et al. also estimate that within 12 months the recently released dataset for counterspeech by Mathew et al. had lost more than 60% of its content BIBREF65, BIBREF58. Dataset degradation poses three main risks: First, if less data is available then there is a greater likelihood of overfitting. Second, the class distributions usually change as proportionally more of the abusive content is taken down than the non-abusive. Third, it is also likely that the more overt forms of abuse are taken down, rather than the covert instances, thereby changing the qualitative nature of the dataset. <<</The challenges and opportunities of achieving Open Science>>> <<<Research infrastructure: Solutions for sharing training datasets>>> The problem of data access and sharing remains unresolved in the field of abusive content detection, much like other areas of computational research BIBREF99. At present, an ethical, secure and easy way of sharing sensitive tools and resources has not been developed and adopted in the field. More effective dataset sharing would (1) enable greater collaboration amongst researchers, (2) enhance the reproducibility of research by encouraging greater scrutiny BIBREF100, BIBREF101, BIBREF102 and (3) substantively advance the field by enabling future researchers to better understand the biases and limitations of existing research and to identify new research directions. There are two main challenges which must be overcome to ensure that training datasets can be shared and used by future researchers. First, dataset quality: the size, class distribution and quality of their content must be maintained. Second, dataset access: access to datasets must be controlled so that researchers can use them, whilst respecting platforms' Terms of Service and preventing potential extremists from gaining access. These problems are closely entwined and the solutions available, which follow, have implications for both of them. Synthetic datasets. Four of the datasets we have reviewed were developed synthetically. This resolves the dataset quality problem but introduces additional biases and limitations because the data is not real. Synthetic datasets still need to be shared in such a way as to limit access for potential extremists but face no challenges from platforms' Terms of Service. Data `philanthropy' or `donations'.
These are defined as `the act of an individual actively consenting to donate their personal data for research' BIBREF97. Donated data from many individuals could then be combined and shared – but it would still need to be annotated. A further challenge is that many individuals who share abusive content may be unwilling to `donate' their data as this is commonly associated with prosocial motivations, creating severe class imbalances BIBREF97. Data donations could also open new moral and ethical issues; individuals' privacy could be impacted if data is re-analysed to derive new unexpected insights BIBREF103. Informed consent is difficult given that the exact nature of analyses may not be known in advance. Finally, data donations alone do not solve how access can be responsibly protected and how platforms' Terms of Service can be met. For these reasons, data donations are unlikely to be a key part of future research infrastructure for abusive content detection. Platform-backed sharing. Platforms could share datasets and support researchers' access. There are no working examples of this in abusive content detection research, but it has been successfully used in other research areas. For instance, Twitter has made available a large dataset of accounts linked to potential information operations, known as the "IRA" dataset (Internet Research Agency). This would require considerably more interfaces between academia and industry, which may be difficult given the challenges associated with existing initiatives, such as Social Science One. However, in the long term, we propose that this is the most effective solution for the problem of sharing training datasets, not only because it removes Terms of Service limitations but also because platforms have large volumes of original content which has been annotated in a detailed way. This could take one of two forms: platforms either make content which has violated their Community Guidelines available directly or they provide special access post-hoc to datasets which researchers have collected publicly through their API - thereby making sure that datasets do not degrade over time. Data trusts. Data trusts have been described as a way of sharing data `in a fair, safe and equitable way' (BIBREF104, p. 46). However, there is considerable disagreement as to what they entail and how they would operate in practice BIBREF105. The Open Data Institute identifies that data trusts aim to make data open and accessible by providing a framework for storing and accessing data, terms and mechanisms for resolving disputes and, in some cases, contracts to enforce them. For abusive content training datasets, this would provide a way of enabling datasets to be shared, although it would require considerable institutional, legal and financial commitments. Arguably, the easiest way of ensuring data can be shared is to maintain a very simple data trust, such as a database, which would contain all available abusive content training datasets. This repository would need to be permissioned and access controlled to address concerns relating to privacy and ethics. Such a repository could substantially reduce the burden on researchers; once they have been approved access to the repository, they could access all datasets publicly available – different levels of permission could be implemented for different datasets, depending on commercial or research sensitivity.
Furthermore, this repository could contain all of the metadata reported with datasets and such information could be included at the point of deposit, based on the `data statements' work of Bender and Friedman BIBREF18. A simple API could be developed for depositing and reading data, similar to that of the HateBase. The permissioning system could be maintained either through a single institution or, to avoid power concentrating amongst a small group of researchers, through a decentralised blockchain. <<</Research infrastructure: Solutions for sharing training datasets>>> <<<A new repository of training datasets: Hatespeechdata.com>>> The resources and infrastructure needed to create a dedicated data trust and API for sharing abusive content training datasets are substantial and require considerable further engagement with research teams in this field. In the interim, to encourage greater sharing of datasets, we have launched a dedicated website which contains all of the datasets analysed here: https://hatespeechdata.com. Based on the analysis in the previous sections, we have also provided partial data statements BIBREF18. The website also contains previously published abusive keyword dictionaries, which are not analysed here but which some researchers may find useful. Note that the website only contains information/data which the original authors have already made publicly available elsewhere. It will be updated with new datasets in the future. <<</A new repository of training datasets: Hatespeechdata.com>>> <<</Dataset sharing>>> <<<Best Practices for training dataset creation>>> Much can be learned from existing efforts to create abusive language datasets. We identify best practices which emerge at four distinct points in the process of creating a training dataset: (1) task formation, (2) data selection, (3) annotation, and (4) documentation. <<<Task formation: Defining the task addressed by the dataset>>> Dataset creation should be `problem driven' BIBREF106 and should address a well-defined and specific task, with a clear motivation. This will directly inform the taxonomy design, which should be well-specified and engage with social scientific theory as needed. Defining a clear task which the dataset addresses is especially important given the maturation of the field, ongoing terminological disagreement and the complexity of online abuse. The diversity of phenomena that fit under the umbrella of abusive language means that `general purpose' datasets are unlikely to advance the field. New datasets are most valuable when they address a new target, generator, phenomenon, or domain. Creating datasets which repeat existing work is not nearly as valuable. <<</Task formation: Defining the task addressed by the dataset>>> <<<Selecting data for abusive language annotation>>> Once the task is established, dataset creators should select what language will be annotated, where data will be sampled from and how sampling will be completed. Any data selection exercise is bound to introduce bias, and so it is important to record what decisions are made (and why) in this step. Dataset builders should have a specific target size in mind and also have an idea of the minimum amount of data that is likely to be needed for the task. This is also where steps 1 and 2 intersect: the data selection should be driven by the problem that is addressed rather than what is easy to collect. Ensuring there are enough positive examples of abuse will always be challenging as the prevalence of abuse is so low.
However, given that purposive sampling inevitably introduces biases, creators should explore a range of options before determining the best one – and consider using multiple sampling methods at once, such as including data from different times, different locations, different types of users and different platforms. Other options include using measures of linguistic diversity to maximize the variety of text included in datasets, or including words that cluster close to known abusive terms. <<</Selecting data for abusive language annotation>>> <<<Annotating abusive language>>> Annotators must be hired, trained and given appropriate guidelines. Annotators work best with solid guidelines that are easy to grasp and have clear examples BIBREF107. The best examples are both illustrative, in order to capture the concepts (such as `threatening language'), and provide insight into `edge cases': content that only just crosses the line into abuse. Decisions should be made about how to handle intrinsically difficult aspects of abuse, such as irony, calumniation and intent (see above). Annotation guidelines should be developed iteratively by dataset creators; by working through the data, rules can be established for difficult or counter-intuitive coding decisions, and a set of shared practices developed. Annotators should be included in this iterative process. Discussions with annotators about the language that they have seen "in the field" offer an opportunity to enhance and refine guidelines – and even taxonomies. Such discussions will lead to more consistent data and provide a knowledge base to draw on for future work. To achieve this, it is important to adopt an open culture where annotators are comfortable providing open feedback and also describing their uncertainties. Annotators should also be given emotional and practical support (as well as appropriate financial compensation), and the harmful and potentially triggering effects of annotating online abuse should be recognised at all times. For a set of guidelines to help protect the well-being of annotators, see BIBREF13. <<</Annotating abusive language>>> <<<Documenting methods, data, and annotators>>> The best training datasets provide as much information as possible and are well-documented. When the method behind them is unclear, they are hard to evaluate, use and build on. Providing as much information as possible can open new and unanticipated analyses and gives more agency to future researchers who use the dataset to create classifiers. For instance, if all annotators' codings are provided (rather than just the `final' decision) then a more nuanced and aware classifier could be developed as, in some cases, it can be better to maximise recall of annotations rather than maximise agreement BIBREF77. Our review found that most datasets have poor methodological descriptions and few (if any) provide enough information to construct an adequate data statement. It is crucial that dataset creators are up front about their biases and limitations: every dataset is biased, and this is only problematic when the biases are unknown. One strategy for doing this is to maintain a document of decisions made when designing and creating the dataset and to then use it to describe to readers the rationale behind decisions. Details about the end-to-end dataset creation process are welcomed. For instance, if the task is crowdsourced then a screenshot of the micro-task presented to workers should be included, and the top-level parameters should be described (e.g.
number of workers, maximum number of tasks per worker, number of annotations per piece of text) BIBREF20. If a dedicated interface is used for the annotation, this should also be described and screenshotted, as the interface design can influence the annotations. <<</Documenting methods, data, and annotators>>> <<<Best practice summary>>> Unfortunately, as with any burgeoning field, there is confusion and overlap around many of the phenomena discussed in this paper; coupled with the high degree of variation in the quality of method descriptions, this has led to many pieces of research that are hard to combine, compare, or re-use. Our reflections on best practices are driven by this review and the difficulties of creating high quality training datasets. For future researchers, we summarise our recommendations in the following seven points: (1) Bear in mind the purpose of the dataset; design the dataset to help address questions and problems from previous research. (2) Avoid using `easy to access' data, and instead explore new sources which may have greater diversity; consider what biases may be created by your sampling method. (3) Determine size based on data sparsity and having enough positive classes rather than `what is possible'. (4) Establish a clear taxonomy to be used for the task, with meaningful and theoretically sound categories. (5) Provide annotators with guidelines; develop them iteratively and publish them with your dataset; consider using trained annotators given the complexities of abusive content. (6) Involve people who have direct experience of the abuse which you are studying whenever possible (and provided that you can protect their well-being). (7) Report on every step of the research through a Data Statement. <<</Best practice summary>>> <<</Best Practices for training dataset creation>>> <<<Conclusion>>> This paper examined a large set of datasets for the creation of abusive content detection systems, providing insight into what they contain, how they are annotated, and how tasks have been framed. Based on an evidence-driven review, we provided an extended discussion of how to make training datasets more readily available and useful, including the challenges and opportunities of open science as well as the need for more research infrastructure. We reported on the development of hatespeechdata.com – a new repository for online abusive content training datasets. Finally, we outlined best practices for the creation of training datasets for detecting online abuse. We have effectively met the four research aims elaborated at the start of the paper. Training detection systems for online abuse is a substantial challenge with real social consequences. If we want the systems we develop to be usable, scalable and with few biases then we need to train them on the right data: garbage in will only lead to garbage out. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nBackground\nAnalysis of training datasets\nThe purpose of training datasets\nProblems addressed by datasets\nUses of datasets: How detection tasks are defined\nDetection tasks: the nature of abuse\nDetection tasks: Granularity of taxonomies\nThe content of training datasets\nThe `Level' of content\nLanguage\nSource of data\nSize\nClass distribution and sampling\nIdentity of the content creators\nAnnotation of training datasets\nAnnotation process\nIdentity of the annotators\nGuidelines for annotation\nIrony\nCalumniation\nIntent\nDataset sharing\nThe challenges and opportunities of achieving Open Science\nResearch infrastructure: Solutions for sharing training datasets\nA new repository of training datasets: Hatespeechdata.com\nBest Practices for training dataset creation\nTask formation: Defining the task addressed by the dataset\nSelecting data for abusive language annotation\nAnnotating abusive language\nDocumenting methods, data, and annotators\nBest practice summary\nConclusion" ], "type": "outline" }
1911.02116
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Unsupervised Cross-lingual Representation Learning at Scale <<<Abstract>>> This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available. <<</Abstract>>> <<<Introduction>>> The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully studying the effects of training unsupervised cross-lingual representations at a very large scale. We present XLM-R, a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering. Multilingual masked language models (MLM) like mBERT BIBREF0 and XLM BIBREF1 have pushed the state-of-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer models BIBREF2 on many languages. These models allow for effective cross-lingual transfer, as seen in a number of benchmarks including cross-lingual natural language inference BIBREF3, BIBREF4, BIBREF5, question answering BIBREF6, BIBREF7, and named entity recognition BIBREF8, BIBREF9. However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale, especially for lower resource languages. In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multilingual language models at scale, inspired by recent monolingual scaling efforts BIBREF10. We measure the trade-off between high-resource and low-resource languages and the impact of language sampling and vocabulary size. The experiments expose a trade-off as we scale the number of languages for a fixed model capacity: more languages leads to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades. We refer to this tradeoff as the curse of multilinguality, and show that it can be alleviated by simply increasing model capacity.
We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets. Our best model XLM-RoBERTa (XLM-R) outperforms mBERT on cross-lingual classification by up to 21% accuracy on low-resource languages like Swahili and Urdu. It outperforms the previous state of the art by 3.9% average accuracy on XNLI, 2.1% average F1-score on Named Entity Recognition, and 8.4% average F1-score on cross-lingual Question Answering. We also evaluate monolingual fine tuning on the GLUE and XNLI benchmarks, where XLM-R obtains results competitive with state-of-the-art monolingual models, including RoBERTa BIBREF10. These results demonstrate, for the first time, that it is possible to have a single large model for all languages, without sacrificing per-language performance. We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource language understanding. <<</Introduction>>> <<<Related Work>>> From pretrained word embeddings BIBREF11, BIBREF12 to pretrained contextualized representations BIBREF13, BIBREF14 and transformer based language models BIBREF15, BIBREF0, unsupervised representation learning has significantly improved the state of the art in natural language understanding. Parallel work on cross-lingual understanding BIBREF16, BIBREF14, BIBREF1 extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages. Most recently, BIBREF0 and BIBREF1 introduced mBERT and XLM - masked language models trained on multiple languages, without any cross-lingual supervision. BIBREF1 propose translation language modeling (TLM) as a way to leverage parallel data and obtain a new state of the art on the cross-lingual natural language inference (XNLI) benchmark BIBREF5. They further show strong improvements on unsupervised machine translation and pretraining for sequence generation. Separately, BIBREF8 demonstrated the effectiveness of multilingual models like mBERT on sequence labeling tasks. BIBREF17 showed gains over XLM using cross-lingual multi-task learning, and BIBREF18 demonstrated the efficiency of cross-lingual data augmentation for cross-lingual NLI. However, all of this work was at a relatively modest scale, in terms of the amount of training data, as compared to our approach. The benefits of scaling language model pretraining by increasing the size of the model as well as the training data has been extensively studied in the literature. For the monolingual case, BIBREF19 show how large-scale LSTM models can obtain much stronger performance on language modeling benchmarks when trained on billions of tokens. GPT BIBREF15 also highlights the importance of scaling the amount of data and RoBERTa BIBREF10 shows that training BERT longer on more data leads to significant boost in performance. Inspired by RoBERTa, we show that mBERT and XLM are undertuned, and that simple improvements in the learning procedure of unsupervised MLM leads to much better performance. We train on cleaned CommonCrawls BIBREF20, which increase the amount of data for low-resource languages by two orders of magnitude on average. Similar data has also been shown to be effective for learning high quality word embeddings in multiple languages BIBREF21. Several efforts have trained massively multilingual machine translation models from large parallel corpora. 
They uncover the high and low resource trade-off and the problem of capacity dilution BIBREF22, BIBREF23. The work most similar to ours is BIBREF24, which trains a single model in 103 languages on over 25 billion parallel sentences. BIBREF25 further analyze the representations obtained by the encoder of a massively multilingual machine translation system and show that it obtains similar results to mBERT on cross-lingual NLI. Our work, in contrast, focuses on the unsupervised learning of cross-lingual representations and their transfer to discriminative tasks. <<</Related Work>>> <<<Model and Data>>> In this section, we present the training objective, languages, and data we use. We follow the XLM approach BIBREF1 as closely as possible, only introducing changes that improve performance at scale. <<<Masked Language Models.>>> We use a Transformer model BIBREF2 trained with the multilingual MLM objective BIBREF0, BIBREF1 using only monolingual data. We sample streams of text from each language and train the model to predict the masked tokens in the input. We apply subword tokenization directly on raw text data using Sentence Piece BIBREF26 with a unigram language model BIBREF27. We sample batches from different languages using the same sampling distribution as BIBREF1, but with $\alpha =0.3$. Unlike BIBREF1, we do not use language embeddings, which allows our model to better deal with code-switching. We use a large vocabulary size of 250K with a full softmax and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params). For all of our ablation studies, we use a BERTBase architecture with a vocabulary of 150K tokens. Appendix SECREF8 goes into more details about the architecture of the different models referenced in this paper. <<</Masked Language Models.>>> <<<Scaling to a hundred languages.>>> XLM-R is trained on 100 languages; we provide a full list of languages and associated statistics in Appendix SECREF7. Figure specifies the iso codes of 88 languages that are shared across XLM-R and XLM-100, the model from BIBREF1 trained on Wikipedia text in 100 languages. Compared to previous work, we replace some languages with more commonly used ones such as romanized Hindi and traditional Chinese. In our ablation studies, we always include the 7 languages for which we have classification and sequence labeling evaluation benchmarks: English, French, German, Russian, Chinese, Swahili and Urdu. We chose this set as it covers a suitable range of language families and includes low-resource languages such as Swahili and Urdu. We also consider larger sets of 15, 30, 60 and all 100 languages. When reporting results on high-resource and low-resource, we refer to the average of English and French results, and the average of Swahili and Urdu results respectively. <<</Scaling to a hundred languages.>>> <<<Scaling the Amount of Training Data.>>> Following BIBREF20, we build a clean CommonCrawl Corpus in 100 languages. We use an internal language identification model in combination with the one from fastText BIBREF28. We train language models in each language and use it to filter documents as described in BIBREF20. We consider one CommonCrawl dump for English and twelve dumps for all other languages, which significantly increases dataset sizes, especially for low-resource languages like Burmese and Swahili. Figure shows the difference in size between the Wikipedia Corpus used by mBERT and XLM-100, and the CommonCrawl Corpus we use. 
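The language-identification part of this filtering can be sketched as follows. This is only an illustration: the paper describes an internal LID model used together with fastText plus an additional language-model-based filter (as in BIBREF20), none of which is reproduced here, and the public `lid.176.bin` model file and the 0.5 confidence threshold are assumptions rather than details from the paper.

```python
# Rough sketch of fastText-based language-identification filtering for a
# CommonCrawl-style text dump. Keeps only lines confidently identified as
# the target language. This is NOT the authors' internal pipeline.
import fasttext

LID_MODEL_PATH = "lid.176.bin"   # public fastText LID model (assumed)
CONFIDENCE_THRESHOLD = 0.5       # arbitrary cut-off, for illustration only

def filter_language(lines, target_lang="sw", model_path=LID_MODEL_PATH):
    """Yield lines predicted to be in `target_lang` with enough confidence."""
    model = fasttext.load_model(model_path)
    for line in lines:
        text = line.strip().replace("\n", " ")
        if not text:
            continue
        labels, probs = model.predict(text, k=1)
        lang = labels[0].replace("__label__", "")
        if lang == target_lang and probs[0] >= CONFIDENCE_THRESHOLD:
            yield text

# Example usage: keep Swahili lines from one shard of a dump.
# with open("commoncrawl_shard.txt", encoding="utf-8") as f:
#     swahili_lines = list(filter_language(f, target_lang="sw"))
```

In the setup described above, a second stage based on per-language language-model perplexity (following BIBREF20) would then further remove noisy documents before pretraining.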
As we show in Section SECREF19, monolingual Wikipedia corpora are too small to enable unsupervised representation learning. Based on our experiments, we found that a few hundred MiB of text data is usually a minimal size for learning a BERT model. <<</Scaling the Amount of Training Data.>>> <<</Model and Data>>> <<<Evaluation>>> We consider four evaluation benchmarks. For cross-lingual understanding, we use cross-lingual natural language inference, named entity recognition, and question answering. We use the GLUE benchmark to evaluate the English performance of XLM-R and compare it to other state-of-the-art models. <<<Cross-lingual Natural Language Inference (XNLI).>>> The XNLI dataset comes with ground-truth dev and test sets in 15 languages, and a ground-truth English training set. The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well. We evaluate our model on cross-lingual transfer from English to other languages. We also consider three machine translation baselines: (i) translate-test: dev and test sets are machine-translated to English and a single English model is used (ii) translate-train (per-language): the English training set is machine-translated to each language and we fine-tune a multiligual model on each training set (iii) translate-train-all (multi-language): we fine-tune a multilingual model on the concatenation of all training sets from translate-train. For the translations, we use the official data provided by the XNLI project. <<</Cross-lingual Natural Language Inference (XNLI).>>> <<<Named Entity Recognition.>>> For NER, we consider the CoNLL-2002 BIBREF29 and CoNLL-2003 BIBREF30 datasets in English, Dutch, Spanish and German. We fine-tune multilingual models either (1) on the English set to evaluate cross-lingual transfer, (2) on each set to evaluate per-language performance, or (3) on all sets to evaluate multilingual learning. We report the F1 score, and compare to baselines from BIBREF31 and BIBREF32. <<</Named Entity Recognition.>>> <<<Cross-lingual Question Answering.>>> We use the MLQA benchmark from BIBREF7, which extends the English SQuAD benchmark to Spanish, German, Arabic, Hindi, Vietnamese and Chinese. We report the F1 score as well as the exact match (EM) score for cross-lingual transfer from English. <<</Cross-lingual Question Answering.>>> <<<GLUE Benchmark.>>> Finally, we evaluate the English performance of our model on the GLUE benchmark BIBREF33 which gathers multiple classification tasks, such as MNLI BIBREF4, SST-2 BIBREF34, or QNLI BIBREF35. We use BERTLarge and RoBERTa as baselines. <<</GLUE Benchmark.>>> <<</Evaluation>>> <<<Analysis and Results>>> In this section, we perform a comprehensive analysis of multilingual masked language models. We conduct most of the analysis on XNLI, which we found to be representative of our findings on other tasks. We then present the results of XLM-R on cross-lingual understanding and GLUE. Finally, we compare multilingual and monolingual models, and present results on low-resource languages. <<<Improving and Understanding Multilingual Masked Language Models>>> Much of the work done on understanding the cross-lingual effectiveness of mBERT or XLM BIBREF8, BIBREF9, BIBREF7 has focused on analyzing the performance of fixed pretrained models on downstream tasks. In this section, we present a comprehensive study of different factors that are important to pretraining large scale multilingual models. 
We highlight the trade-offs and limitations of these models as we scale to one hundred languages. <<<Transfer-dilution trade-off and Curse of Multilinguality.>>> Model capacity (i.e. the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference. For a fixed sized model, the per-language capacity decreases as we increase the number of languages. While low-resource language performance can be improved by adding similar higher-resource languages during pretraining, the overall downstream performance suffers from this capacity dilution BIBREF24. Positive transfer and capacity dilution have to be traded off against each other. We illustrate this trade-off in Figure , which shows XNLI performance vs the number of languages the model is pretrained on. Initially, as we go from 7 to 15 languages, the model is able to take advantage of positive transfer and this improves performance, especially on low resource languages. Beyond this point the curse of multilinguality kicks in and degrades performance across all languages. Specifically, the overall XNLI accuracy decreases from 71.8% to 67.7% as we go from XLM-7 to XLM-100. The same trend can be observed for models trained on the larger CommonCrawl Corpus. The issue is even more prominent when the capacity of the model is small. To show this, we pretrain models on Wikipedia Data in 7, 30 and 100 languages. As we add more languages, we make the Transformer wider by increasing the hidden size from 768 to 960 to 1152. In Figure , we show that the added capacity allows XLM-30 to be on par with XLM-7, thus overcoming the curse of multilinguality. The added capacity for XLM-100, however, is not enough and it still lags behind due to higher vocabulary dilution (recall from Section SECREF3 that we used a fixed vocabulary size of 150K for all models). <<</Transfer-dilution trade-off and Curse of Multilinguality.>>> <<<High-resource/Low-resource trade-off.>>> The allocation of the model capacity across languages is controlled by several parameters: the training set size, the size of the shared subword vocabulary, and the rate at which we sample training examples from each language. We study the effect of sampling on the performance of high-resource (English and French) and low-resource (Swahili and Urdu) languages for an XLM-100 model trained on Wikipedia (we observe a similar trend for the construction of the subword vocab). Specifically, we investigate the impact of varying the $\alpha $ parameter which controls the exponential smoothing of the language sampling rate. Similar to BIBREF1, we use a sampling rate proportional to the number of sentences in each corpus. Models trained with higher values of $\alpha $ see batches of high-resource languages more often. Figure shows that the higher the value of $\alpha $, the better the performance on high-resource languages, and vice-versa. When considering overall performance, we found $0.3$ to be an optimal value for $\alpha $, and use this for XLM-R. <<</High-resource/Low-resource trade-off.>>> <<<Importance of Capacity and Vocabulary Size.>>> In previous sections and in Figure , we showed the importance of scaling the model size as we increase the number of languages. Similar to the overall model size, we argue that scaling the size of the shared vocabulary (the vocabulary capacity) can improve the performance of multilingual models on downstream tasks. 
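As a back-of-the-envelope illustration of this capacity argument (a simplified sketch: biases, layer norms and position embeddings are ignored, input and output embeddings are assumed tied, and the feed-forward intermediate size is taken as 4x the hidden size), the parameter count of a BERT-style encoder is roughly the embedding matrix, vocab x hidden, plus about 12 x hidden^2 per layer:

```python
# Approximate parameter accounting for a BERT-style encoder:
#   embeddings ~ vocab_size * hidden
#   per layer  ~ 12 * hidden^2  (4 attention projections + FFN with 4x width)
# Biases, layer norms and position embeddings are ignored; embeddings tied.

def approx_params(vocab_size: int, hidden: int, layers: int) -> int:
    embedding = vocab_size * hidden
    body = layers * 12 * hidden * hidden
    return embedding + body

configs = {
    "XLM-R Base  (250K vocab, H=768,  L=12)": (250_000, 768, 12),
    "XLM-R Large (250K vocab, H=1024, L=24)": (250_000, 1024, 24),
    "hypothetical 32K vocab,  H=1024, L=24": (32_000, 1024, 24),
    "hypothetical 512K vocab, H=1024, L=24": (512_000, 1024, 24),
}
for name, (v, h, l) in configs.items():
    total = approx_params(v, h, l)
    print(f"{name}: ~{total / 1e6:.0f}M params, "
          f"{v * h / total:.0%} of them in the embedding matrix")
```

Under a fixed parameter budget, a larger vocabulary therefore has to be paid for with a narrower or shallower Transformer, which is the trade-off examined next; the roughly 277M and 558M totals this sketch gives for the Base and Large configurations are close to the 270M and 550M figures quoted earlier, so the approximation is not far off.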
To illustrate this effect, we train XLM-100 models on Wikipedia data with different vocabulary sizes. We keep the overall number of parameters constant by adjusting the width of the transformer. Figure shows that even with a fixed capacity, we observe a 2.8% increase in XNLI average accuracy as we increase the vocabulary size from 32K to 256K. This suggests that multilingual models can benefit from allocating a higher proportion of the total number of parameters to the embedding layer even though this reduces the size of the Transformer. With bigger models, we believe that using a vocabulary of up to 2 million tokens with an adaptive softmax BIBREF36, BIBREF37 should improve performance even further, but we leave this exploration to future work. For simplicity and given the computational constraints, we use a vocabulary of 250k for XLM-R. We further illustrate the importance of this parameter, by training three models with the same transformer architecture (BERTBase) but with different vocabulary sizes: 128K, 256K and 512K. We observe more than 3% gains in overall accuracy on XNLI by simply increasing the vocab size from 128k to 512k. <<</Importance of Capacity and Vocabulary Size.>>> <<<Importance of large-scale training with more data.>>> As shown in Figure , the CommonCrawl Corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora. Figure shows that for the same BERTBase architecture, all models trained on CommonCrawl obtain significantly better performance. Apart from scaling the training data, BIBREF10 also showed the benefits of training MLMs longer. In our experiments, we observed similar effects of large-scale training, such as increasing batch size (see Figure ) and training time, on model performance. Specifically, we found that using validation perplexity as a stopping criterion for pretraining caused the multilingual MLM in BIBREF1 to be under-tuned. In our experience, performance on downstream tasks continues to improve even after validation perplexity has plateaued. Combining this observation with our implementation of the unsupervised XLM-MLM objective, we were able to improve the performance of BIBREF1 from 71.3% to more than 75% average accuracy on XNLI, which was on par with their supervised translation language modeling (TLM) objective. Based on these results, and given our focus on unsupervised learning, we decided to not use the supervised TLM objective for training our models. <<</Importance of large-scale training with more data.>>> <<<Simplifying multilingual tokenization with Sentence Piece.>>> The different language-specific tokenization tools used by mBERT and XLM-100 make these models more difficult to use on raw text. Instead, we train a Sentence Piece model (SPM) and apply it directly on raw text data for all languages. We did not observe any loss in performance for models trained with SPM when compared to models trained with language-specific preprocessing and byte-pair encoding (see Figure ) and hence use SPM for XLM-R. <<</Simplifying multilingual tokenization with Sentence Piece.>>> <<</Improving and Understanding Multilingual Masked Language Models>>> <<<Cross-lingual Understanding Results>>> Based on these results, we adapt the setting of BIBREF1 and use a large Transformer model with 24 layers and 1024 hidden states, with a 250k vocabulary. We use the multilingual MLM loss and train our XLM-R model for 1.5 Million updates on five hundred 32GB Nvidia V100 GPUs with a batch size of 8192. 
We leverage the SPM-preprocessed text data from CommonCrawl in 100 languages and sample languages with $\alpha =0.3$. In this section, we show that it outperforms all previous techniques on cross-lingual benchmarks while getting performance on par with RoBERTa on the GLUE benchmark. <<<XNLI.>>> Table shows XNLI results and adds some additional details: (i) the number of models the approach induces (#M), (ii) the data on which the model was trained (D), and (iii) the number of languages the model was pretrained on (#lg). As we show in our results, these parameters significantly impact performance. Column #M specifies whether model selection was done separately on the dev set of each language ($N$ models), or on the joint dev set of all the languages (single model). We observe a 0.6 decrease in overall accuracy when we go from $N$ models to a single model - going from 71.3 to 70.7. We encourage the community to adopt this setting. For cross-lingual transfer, while this approach is not fully zero-shot transfer, we argue that in real applications, a small amount of supervised data is often available for validation in each language. XLM-R sets a new state of the art on XNLI. On cross-lingual transfer, XLM-R obtains 80.1% accuracy, outperforming the XLM-100 and mBERT open-source models by 9.4% and 13.8% average accuracy. On the Swahili and Urdu low-resource languages, XLM-R outperforms XLM-100 by 13.8% and 9.3%, and mBERT by 21.6% and 13.7%. While XLM-R handles 100 languages, we also show that it outperforms the former state of the art Unicoder BIBREF17 and XLM (MLM+TLM), which handle only 15 languages, by 4.7% and 5% average accuracy respectively. Using the multilingual training of translate-train-all, XLM-R further improves performance and reaches 82.4% accuracy, a new overall state of the art for XNLI, outperforming Unicoder by 3.9%. Multilingual training is similar to practical applications where training sets are available in various languages for the same task. In the case of XNLI, datasets have been translated, and translate-train-all can be seen as some form of cross-lingual data augmentation BIBREF18, similar to back-translation BIBREF38. <<</XNLI.>>> <<<Question Answering.>>> We also obtain new state of the art results on the MLQA cross-lingual question answering benchmark, introduced by BIBREF7. We follow their procedure by training on the English training data and evaluating on the 7 languages of the dataset. We report results in Table . XLM-R obtains F1 and accuracy scores of 70.0% and 52.2% while the previous state of the art was 61.6% and 43.5%. XLM-R also outperforms mBERT by 12.3% F1-score and 10.6% accuracy. It even outperforms BERT-Large on English, confirming its strong monolingual performance. <<</Question Answering.>>> <<</Cross-lingual Understanding Results>>> <<<Multilingual versus Monolingual>>> In this section, we present results of multilingual XLM models against monolingual BERT models. <<<GLUE: XLM-R versus RoBERTa.>>> Our goal is to obtain a multilingual model with strong performance on both, cross-lingual understanding tasks as well as natural language understanding tasks for each language. To that end, we evaluate XLM-R on the GLUE benchmark. We show in Table , that XLM-R obtains better average dev performance than BERTLarge by 1.3% and reaches performance on par with XLNetLarge. The RoBERTa model outperforms XLM-R by only 1.3% on average. We believe future work can reduce this gap even further by alleviating the curse of multilinguality and vocabulary dilution. 
These results demonstrate the possibility of learning one model for many languages while maintaining strong performance on per-language downstream tasks. <<</GLUE: XLM-R versus RoBERTa.>>> <<<XNLI: XLM versus BERT.>>> A recurrent criticism against multilingual model is that they obtain worse performance than their monolingual counterparts. In addition to the comparison of XLM-R and RoBERTa, we provide the first comprehensive study to assess this claim on the XNLI benchmark. We extend our comparison between multilingual XLM models and monolingual BERT models on 7 languages and compare performance in Table . We train 14 monolingual BERT models on Wikipedia and CommonCrawl, and two XLM-7 models. We add slightly more capacity in the vocabulary size of the multilingual model for a better comparison. To our surprise - and backed by further study on internal benchmarks - we found that multilingual models can outperform their monolingual BERT counterparts. Specifically, in Table , we show that for cross-lingual transfer, monolingual baselines outperform XLM-7 for both Wikipedia and CC by 1.6% and 1.3% average accuracy. However, by making use of multilingual training (translate-train-all) and leveraging training sets coming from multiple languages, XLM-7 can outperform the BERT models: our XLM-7 trained on CC obtains 80.0% average accuracy on the 7 languages, while the average performance of monolingual BERT models trained on CC is 77.5%. This is a surprising result that shows that the capacity of multilingual models to leverage training data coming from multiple languages for a particular task can overcome the capacity dilution problem to obtain better overall performance. <<</XNLI: XLM versus BERT.>>> <<</Multilingual versus Monolingual>>> <<<Representation Learning for Low-resource Languages>>> We observed in Table that pretraining on Wikipedia for Swahili and Urdu performed similarly to a randomly initialized model; most likely due to the small size of the data for these languages. On the other hand, pretraining on CC improved performance by up to 10 points. This confirms our assumption that mBERT and XLM-100 rely heavily on cross-lingual transfer but do not model the low-resource languages as well as XLM-R. Specifically, in the translate-train-all setting, we observe that the biggest gains for XLM models trained on CC, compared to their Wikipedia counterparts, are on low-resource languages; 7% and 4.8% improvement on Swahili and Urdu respectively. <<</Representation Learning for Low-resource Languages>>> <<</Analysis and Results>>> <<<Conclusion>>> In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages. We show that it provides strong gains over previous multilingual models like mBERT and XLM on classification, sequence labeling and question answering. We exposed the limitations of multilingual MLMs, in particular by uncovering the high-resource versus low-resource trade-off, the curse of multilinguality and the importance of key hyperparameters. We also expose the surprising effectiveness of multilingual models over monolingual models, and show strong improvements on low-resource languages. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nModel and Data\nMasked Language Models.\nScaling to a hundred languages.\nScaling the Amount of Training Data.\nEvaluation\nCross-lingual Natural Language Inference (XNLI).\nNamed Entity Recognition.\nCross-lingual Question Answering.\nGLUE Benchmark.\nAnalysis and Results\nImproving and Understanding Multilingual Masked Language Models\nTransfer-dilution trade-off and Curse of Multilinguality.\nHigh-resource/Low-resource trade-off.\nImportance of Capacity and Vocabulary Size.\nImportance of large-scale training with more data.\nSimplifying multilingual tokenization with Sentence Piece.\nCross-lingual Understanding Results\nXNLI.\nQuestion Answering.\nMultilingual versus Monolingual\nGLUE: XLM-R versus RoBERTa.\nXNLI: XLM versus BERT.\nRepresentation Learning for Low-resource Languages\nConclusion" ], "type": "outline" }
1912.03184
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> GoodNewsEveryone: A Corpus of News Headlines Annotated with Emotions, Semantic Roles, and Reader Perception <<<Abstract>>> Most research on emotion analysis from text focuses on the task of emotion classification or emotion intensity regression. Fewer works address emotions as structured phenomena, which can be explained by the lack of relevant datasets and methods. We fill this gap by releasing a dataset of 5000 English news headlines annotated via crowdsourcing with their dominant emotions, emotion experiencers and textual cues, emotion causes and targets, as well as the reader's perception and emotion of the headline. We propose a multiphase annotation procedure which leads to high quality annotations on such a task via crowdsourcing. Finally, we develop a baseline for the task of automatic prediction of structures and discuss results. The corpus we release enables further research on emotion classification, emotion intensity prediction, emotion cause detection, and supports further qualitative studies. <<</Abstract>>> <<<Introduction>>> Research in emotion analysis from text focuses on mapping words, sentences, or documents to emotion categories based on the models of Ekman1992 or Plutchik2001, which propose the emotion classes of joy, sadness, anger, fear, trust, disgust, anticipation and surprise. Emotion analysis has been applied to a variety of tasks including large scale social media mining BIBREF0, literature analysis BIBREF1, BIBREF2, lyrics and music analysis BIBREF3, BIBREF4, and the analysis of the development of emotions over time BIBREF5. There are at least two types of questions which cannot yet be answered by these emotion analysis systems. Firstly, such systems do not often explicitly model the perspective of understanding the written discourse (reader, writer, or the text's point of view). For example, the headline “Djokovic happy to carry on cruising” BIBREF6 contains an explicit mention of joy carried by the word “happy”. However, it may evoke different emotions in a reader (e. g., the reader is a supporter of Roger Federer), and the same applies to the author of the headline. To the best of our knowledge, only one work takes this point into consideration BIBREF7. Secondly, the structure that can be associated with the emotion description in text is not uncovered. Questions like: “Who feels a particular emotion?” or “What causes that emotion?” still remain unaddressed. There has been almost no work in this direction, with only few exceptions in English BIBREF8, BIBREF9 and Mandarin BIBREF10, BIBREF11. With this work, we argue that emotion analysis would benefit from a more fine-grained analysis that considers the full structure of an emotion, similar to the research in aspect-based sentiment analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15. Consider the headline: “A couple infuriated officials by landing their helicopter in the middle of a nature reserve” BIBREF16 depicted on Figure FIGREF1. One could mark “officials” as the experiencer, “a couple” as the target, and “landing their helicopter in the middle of a nature reserve” as the cause of anger. Now let us imagine that the headline starts with “A cheerful couple” instead of “A couple”. 
A simple approach to emotion detection based on cue words will capture that this sentence contains descriptions of anger (“infuriated”) and joy (“cheerful”). It would, however, fail in attributing correct roles to the couple and the officials, thus, the distinction between their emotion experiences would remain hidden from us. In this study, we focus on an annotation task with the goal of developing a dataset that would enable addressing the issues raised above. Specifically, we introduce the corpus GoodNewsEveryone, a novel dataset of news English headlines collected from 82 different sources analyzed in the Media Bias Chart BIBREF17 annotated for emotion class, emotion intensity, semantic roles (experiencer, cause, target, cue), and reader perspective. We use semantic roles, since identifying who feels what and why is essentially a semantic role labeling task BIBREF18. The roles we consider are a subset of those defined for the semantic frame for “Emotion” in FrameNet BIBREF19. We focus on news headlines due to their brevity and density of contained information. Headlines often appeal to a reader's emotions, and hence are a potential good source for emotion analysis. In addition, news headlines are easy-to-obtain data across many languages, void of data privacy issues associated with social media and microblogging. Our contributions are: (1) we design a two phase annotation procedure for emotion structures via crowdsourcing, (2) present the first resource of news headlines annotated for emotions, cues, intensity, experiencers, causes, targets, and reader emotion, and, (3), provide results of a baseline model to predict such roles in a sequence labeling setting. We provide our annotations at http://www.romanklinger.de/data-sets/GoodNewsEveryone.zip. <<</Introduction>>> <<<Related Work>>> Our annotation is built upon different tasks and inspired by different existing resources, therefore it combines approaches from each of those. In what follows, we look at related work on each task and specify how it relates to our new corpus. <<<Emotion Classification>>> Emotion classification deals with mapping words, sentences, or documents to a set of emotions following psychological models such as those proposed by Ekman1992 (anger, disgust, fear, joy, sadness and surprise) or Plutchik2001; or continuous values of valence, arousal and dominance BIBREF20. One way to create annotated datasets is via expert annotation BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF7. The creators of the ISEAR dataset make use of self-reporting instead, where subjects are asked to describe situations associated with a specific emotion BIBREF25. Crowdsourcing is another popular way to acquire human judgments BIBREF26, BIBREF9, BIBREF9, BIBREF27, BIBREF28. Another recent dataset for emotion recognition reproduces the ISEAR dataset in a crowdsourcing setting for both English and German BIBREF29. Lastly, social network platforms play a central role in data acquisition with distant supervision, because they provide a cheap way to obtain large amounts of noisy data BIBREF26, BIBREF9, BIBREF30, BIBREF31. Table TABREF3 shows an overview of resources. More details could be found in Bostan2018. <<</Emotion Classification>>> <<<Emotion Intensity>>> In emotion intensity prediction, the term intensity refers to the degree an emotion is experienced. For this task, there are only a few datasets available. 
To our knowledge, the first dataset annotated for emotion intensity is by Aman2007, who ask experts for ratings, followed by the datasets released for the EmoInt shared tasks BIBREF32, BIBREF28, both annotated via crowdsourcing through the best-worst scaling. The annotation task can also be formalized as a classification task, similarly to the emotion classification task, where the goal would be to map some textual input to a class from a set of predefined classes of emotion intensity categories. This approach is used by Aman2007, where they annotate high, moderate, and low. <<</Emotion Intensity>>> <<<Cue or Trigger Words>>> The task of finding a function that segments a textual input and finds the span indicating an emotion category is less researched. Cue or trigger words detection could also be formulated as an emotion classification task for which the set of classes to be predicted is extended to cover other emotion categories with cues. First work that annotated cues was done manually by one expert and three annotators on the domain of blog posts BIBREF21. Mohammad2014 annotates the cues of emotions in a corpus of $4,058$ electoral tweets from US via crowdsourcing. Similar in annotation procedure, Yan2016emocues curate a corpus of 15,553 tweets and annotate it with 28 emotion categories, valence, arousal, and cues. To the best of our knowledge, there is only one work BIBREF8 that leverages the annotations for cues and considers the task of emotion detection where the exact spans that represent the cues need to be predicted. <<</Cue or Trigger Words>>> <<<Emotion Cause Detection>>> Detecting the cause of an expressed emotion in text received relatively little attention, compared to emotion detection. There are only few works on English that focus on creating resources to tackle this task BIBREF23, BIBREF9, BIBREF8, BIBREF33. The task can be formulated in different ways. One is to define a closed set of potential causes after annotation. Then, cause detection is a classification task BIBREF9. Another setting is to find the cause in the text. This is formulated as segmentation or clause classification BIBREF23, BIBREF8. Finding the cause of an emotion is widely researched on Mandarin in both resource creation and methods. Early works build on rule-based systems BIBREF34, BIBREF35, BIBREF36 which examine correlations between emotions and cause events in terms of linguistic cues. The works that follow up focus on both methods and corpus construction, showing large improvements over the early works BIBREF37, BIBREF38, BIBREF33, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, BIBREF11. The most recent work on cause extraction is being done on Mandarin and formulates the task jointly with emotion detection BIBREF10, BIBREF44, BIBREF45. With the exception of Mohammad2014 who is annotating via crowdsourcing, all other datasets are manually labeled, usually by using the W3C Emotion Markup Language. <<</Emotion Cause Detection>>> <<<Semantic Role Labeling of Emotions>>> Semantic role labeling in the context of emotion analysis deals with extracting who feels (experiencer) which emotion (cue, class), towards whom the emotion is expressed (target), and what is the event that caused the emotion (stimulus). The relations are defined akin to FrameNet's Emotion frame BIBREF19. There are two works that work on annotation of semantic roles in the context of emotion. Firstly, Mohammad2014 annotate a dataset of $4,058$ tweets via crowdsourcing. The tweets were published before the U.S. 
presidential elections in 2012. The semantic roles considered are the experiencer, the stimulus, and the target. However, in the case of tweets, the experiencer is mostly the author of the tweet. Secondly, Kim2018 annotate and release REMAN (Relational EMotion ANnotation), a corpus of $1,720$ paragraphs based on Project Gutenberg. REMAN was manually annotated for spans which correspond to emotion cues and entities/events in the roles of experiencers, targets, and causes of the emotion. They also provide baseline results for the automatic prediction of these structures and show that their models benefit from joint modeling of emotions with its roles in all subtasks. Our work follows in motivation Kim2018 and in procedure Mohammad2014. <<</Semantic Role Labeling of Emotions>>> <<<Reader vs. Writer vs. Text Perspective>>> Studying the impact of different annotation perspectives is another little explored area. There are few exceptions in sentiment analysis which investigate the relation between sentiment of a blog post and the sentiment of their comments BIBREF46 or model the emotion of a news reader jointly with the emotion of a comment writer BIBREF47. Fewer works exist in the context of emotion analysis. 5286061 deal with writer's and reader's emotions on online blogs and find that positive reader emotions tend to be linked to positive writer emotions. Buechel2017b and buechel-hahn-2017-emobank look into the effects of different perspectives on annotation quality and find that the reader perspective yields better inter-annotator agreement values. <<</Reader vs. Writer vs. Text Perspective>>> <<</Related Work>>> <<<Data Collection & Annotation>>> We gather the data in three steps: (1) collecting the news and the reactions they elicit in social media, (2) filtering the resulting set to retain relevant items, and (3) sampling the final selection using various metrics. The headlines are then annotated via crowdsourcing in two phases by three annotators in the first phase and by five annotators in the second phase. As a last step, the annotations are adjudicated to form the gold standard. We describe each step in detail below. <<<Collecting Headlines>>> The first step consists of retrieving news headlines from the news publishers. We further retrieve content related to a news item from social media: tweets mentioning the headlines together with replies and Reddit posts that link to the headlines. We use this additional information for subsampling described later. We manually select all news sources available as RSS feeds (82 out of 124) from the Media Bias Chart BIBREF48, a project that analyzes reliability (from original fact reporting to containing inaccurate/fabricated information) and political bias (from most extreme left to most extreme right) of U.S. news sources. Our news crawler retrieved daily headlines from the feeds, together with the attached metadata (title, link, and summary of the news article) from March 2019 until October 2019. Every day, after the news collection finished, Twitter was queried for 50 valid tweets for each headline. In addition to that, for each collected tweet, we collect all valid replies and counts of being favorited, retweeted and replied to in the first 24 hours after its publication. The last step in the pipeline is aquiring the top (“hot”) submissions in the /r/news, /r/worldnews subreddits, and their metadata, including the number of up and downvotes, upvote ratio, number of comments, and comments themselves. 
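A rough sketch of the RSS part of this collection pipeline is given below, using the feedparser package to keep the title, link, and summary of each feed item. The feed URL is a placeholder, and the Twitter and Reddit collection steps are left out.

    import feedparser

    def collect_headlines(feed_url):
        """Fetch one RSS feed and return the per-item metadata kept by the crawler."""
        feed = feedparser.parse(feed_url)
        items = []
        for entry in feed.entries:
            items.append({
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "summary": entry.get("summary", ""),
            })
        return items

    # Placeholder URL; in the actual pipeline one feed per news source is polled daily.
    for item in collect_headlines("https://example.com/rss"):
        print(item["title"])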
<<</Collecting Headlines>>> <<<Filtering & Postprocessing>>> We remove any headlines that have less than 6 tokens (e. g., “Small or nothing”, “But Her Emails”, “Red for Higher Ed”), as well as those starting with certain phrases, such as “Ep.”,“Watch Live:”, “Playlist:”, “Guide to”, and “Ten Things”. We also filter-out headlines that contain a date (e. g., “Headlines for March 15, 2019”) and words from the headlines which refer to visual content, like “video”, “photo”, “image”, “graphic”, “watch”, etc. <<</Filtering & Postprocessing>>> <<<Sampling Headlines>>> We stratify the remaining headlines by source (150 headlines from each source) and subsample equally according to the following strategies: 1) randomly select headlines, 2) select headlines with high count of emotion terms, 3) select headlines that contain named entities, and 4) select the headlines with high impact on social media. Table TABREF16 shows how many headlines are selected by each sampling method in relation to the most dominant emotion (see Section SECREF25). <<<Random Sampling.>>> The goal of the first sampling method is to collect a random sample of headlines that is representative and not biased towards any source or content type. Note that the sample produced using this strategy might not be as rich with emotional content as the other samples. <<</Random Sampling.>>> <<<Sampling via NRC.>>> For the second sampling strategy we hypothesize that headlines containing emotionally charged words are also likely to contain the structures we aim to annotate. This strategy selects headlines whose words are in the NRC dictionary BIBREF49. <<</Sampling via NRC.>>> <<<Sampling Entities.>>> We further hypothesize that headlines that mention named entities may also contain experiencers or targets of emotions, and therefore, they are likely to present a complete emotion structure. This sampling method yields headlines that contain at least one entity name, according to the recognition from spaCy that is trained on OntoNotes 5 and on Wikipedia corpus. We consider organization names, persons, nationalities, religious, political groups, buildings, countries, and other locations. <<</Sampling Entities.>>> <<<Sampling based on Reddit & Twitter.>>> The last sampling strategy involves our Twitter and Reddit metadata. This enables us to select and sample headlines based on their impact on social media (under the assumption that this correlates with emotion connotation of the headline). This strategy chooses them equally from the most favorited tweets, most retweeted headlines on Twitter, most replied to tweets on Twitter, as well as most upvoted and most commented on posts on Reddit. <<</Sampling based on Reddit & Twitter.>>> <<</Sampling Headlines>>> <<<Annotation Procedure>>> Using these sampling and filtering methods, we select $9,932$ headlines. Next, we set up two questionnaires (see Table TABREF17) for the two annotation phases that we describe below. We use Figure Eight. <<<Phase 1: Selecting Emotional Headlines>>> The first questionnaire is meant to determine the dominant emotion of a headline, if that exists, and whether the headline triggers an emotion in a reader. We hypothesize that these two questions help us to retain only relevant headlines for the next, more expensive, annotation phase. During this phase, $9,932$ headlines were annotated by three annotators. 
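Returning briefly to the preprocessing steps: the filtering rules and the entity-based sampling check described above can be sketched as follows. The spaCy model name, the date pattern, and the keyword handling are illustrative assumptions rather than the precise implementation.

    import re
    import spacy

    nlp = spacy.load("en_core_web_sm")  # an OntoNotes-trained English pipeline (assumed model name)

    BAD_PREFIXES = ("Ep.", "Watch Live:", "Playlist:", "Guide to", "Ten Things")
    VISUAL_WORDS = {"video", "photo", "image", "graphic", "watch"}
    DATE_PATTERN = re.compile(r"\b(January|February|March|April|May|June|July|August|"
                              r"September|October|November|December)\s+\d{1,2},\s+\d{4}\b")
    ENTITY_LABELS = {"ORG", "PERSON", "NORP", "FAC", "GPE", "LOC"}

    def keep_headline(headline):
        """Apply the length, prefix, date, and visual-content filters."""
        tokens = headline.split()
        if len(tokens) < 6:
            return False
        if headline.startswith(BAD_PREFIXES):
            return False
        if DATE_PATTERN.search(headline):
            return False
        if any(tok.lower().strip(".,:;") in VISUAL_WORDS for tok in tokens):
            return False
        return True

    def has_named_entity(headline):
        """Entity-based sampling: keep headlines that mention at least one relevant entity."""
        doc = nlp(headline)
        return any(ent.label_ in ENTITY_LABELS for ent in doc.ents)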
The first question of the first phase (P1Q1) is: “Which emotion is most dominant in the given headline?” and annotators are provided a closed list of 15 emotion categories to which the category No emotion was added. The second question (P1Q2) aims to answer whether a given headline would stir up an emotion in most readers and the annotators are provided with only two possible answers (yes or no, see Table TABREF17 and Figure FIGREF1 for details). Our set of 15 emotion categories is an extended set over Plutchik's emotion classes and comprises anger, annoyance, disgust, fear, guilt, joy, love, pessimism, negative surprise, optimism, positive surprise, pride, sadness, shame, and trust. Such a diverse set of emotion labels is meant to provide a more fine-grained analysis and equip the annotators with a wider range of answer choices. <<</Phase 1: Selecting Emotional Headlines>>> <<<Phase 2: Emotion and Role Annotation>>> The annotations collected during the first phase are automatically ranked and the ranking is used to decide which headlines are further annotated in the second phase. Ranking consists of sorting by agreement on P1Q1, considering P1Q2 in the case of ties. The top $5,000$ ranked headlines are annotated by five annotators for emotion class, intensity, reader emotion, and other emotions in case there is not only a dominant emotion. Along with these closed annotation tasks, the annotators are asked to answer several open questions, namely (1) who is the experiencer of the emotion (if mentioned), (2) what event triggered the annotated emotion (if mentioned), (3) if the emotion had a target, and (4) who or what is the target. The annotators are free to select multiple instances related to the dominant emotion by copy-paste into the answer field. For more details on the exact questions and example of answers, see Table TABREF17. Figure FIGREF1 shows a depiction of the procedure. <<</Phase 2: Emotion and Role Annotation>>> <<<Quality Control and Results>>> To control the quality, we ensured that a single annotator annotates maximum 120 headlines (this protects the annotators from reading too many news headlines and from dominating the annotations). Secondly, we let only annotators who geographically reside in the U.S. contribute to the task. We test the annotators on a set of $1,100$ test questions for the first phase (about 10% of the data) and 500 for the second phase. Annotators were required to pass 95%. The questions were generated based on hand-picked non-ambiguous real headlines through swapping out relevant words from the headline in order to obtain a different annotation, for instance, for “Djokovic happy to carry on cruising”, we would swap “Djokovic” with a different entity, the cue “happy” to a different emotion expression. Further, we exclude Phase 1 annotations that were done in less than 10 seconds and Phase 2 annotations that were done in less than 70 seconds. After we collected all annotations, we found unreliable annotators for both phases in the following way: for each annotator and for each question, we compute the probability with which the annotator agrees with the response chosen by the majority. If the computed probability is more than two standard deviations away from the mean we discard all annotations done by that annotator. On average, 310 distinct annotators needed 15 seconds in the first phase. We followed the guidelines of the platform regarding payment and decided to pay for each judgment $$0.02$ (USD) for Phase 1 (total of $$816.00$ USD). 
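The unreliable-annotator filter described above (per-annotator probability of agreeing with the majority answer, discarded beyond two standard deviations from the mean) can be sketched as follows; the data layout is an assumption made for the example.

    from collections import Counter, defaultdict
    import numpy as np

    def find_unreliable_annotators(annotations):
        """annotations: list of (annotator_id, question_id, answer) tuples."""
        # Majority answer per question.
        by_question = defaultdict(list)
        for annotator, question, answer in annotations:
            by_question[question].append(answer)
        majority = {q: Counter(a).most_common(1)[0][0] for q, a in by_question.items()}

        # Per-annotator probability of agreeing with the majority.
        hits, totals = defaultdict(int), defaultdict(int)
        for annotator, question, answer in annotations:
            totals[annotator] += 1
            hits[annotator] += int(answer == majority[question])
        agreement = {a: hits[a] / totals[a] for a in totals}

        scores = np.array(list(agreement.values()))
        mean, std = scores.mean(), scores.std()
        # Discard annotators whose agreement is more than two standard deviations from the mean.
        return [a for a, p in agreement.items() if abs(p - mean) > 2 * std]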
For the second phase, 331 distinct annotators needed on average $\approx $1:17 minutes to perform one judgment. Each judgment was paid with $0.08$$ USD (total $$2,720.00$ USD). <<</Quality Control and Results>>> <<</Annotation Procedure>>> <<<Adjudication of Annotations>>> In this section, we describe the adjudication process we undertook to create the gold dataset and the difficulties we faced in creating a gold set out of the collected annotations. The first step was to discard obviously wrong annotations for open questions, such as annotations in other languages than English, or annotations of spans that were not part of the headline. In the next step, we incrementally apply a set of rules to the annotated instances in a one-or-nothing fashion. Specifically, we incrementally test each instance for a number of criteria in such a way that if at least one criteria is satisfied the instance is accepted and its adjudication is finalized. Instances that do not satisfy at least one criterium are adjudicated manually. <<<Relative Majority Rule.>>> This filter is applied to all questions regardless of their type. Effectively, whenever an entire annotation is agreed upon by at least two annotators, we use all parts of this annotation as the gold annotation. Given the headline depicted in Figure FIGREF1 with the following target role annotations by different annotators: “A couple”, “None”, “A couple”, “officials”, “their helicopter”. The resulting gold annotation is “A couple” and the adjudication process for the target ends. <<</Relative Majority Rule.>>> <<<Most Common Subsequence Rule.>>> This rule is only applied to open text questions. It takes the most common smallest string intersection of all annotations. In the headline above, the experiencer annotations “A couple”, “infuriated officials”, “officials”, “officials”, “infuriated officials” would lead to “officials”. <<</Most Common Subsequence Rule.>>> <<<Longest Common Subsequence Rule.>>> This rule is only applied two different intersections are the most common (previous rule), and these two intersect. We then accept the longest common subsequence. Revisiting the example for deciding on the cause role with the annotations “by landing their helicopter in the nature reserve”, “by landing their helicopter”, “landing their helicopter in the nature reserve”, “a couple infuriated officials”, “infuriated” the adjudicated gold is “landing their helicopter in the nature reserve”. Table TABREF27 shows through examples of how each rule works and how many instances are “solved” by each adjudication rule. <<</Longest Common Subsequence Rule.>>> <<<Noun Chunks>>> For the role of experiencer, we accept only the most-common noun-chunk(s). The annotations that are left after being processed by all the rules described above are being adjudicated manually by the authors of the paper. We show examples for all roles in Table TABREF29. <<</Noun Chunks>>> <<</Adjudication of Annotations>>> <<</Data Collection & Annotation>>> <<<Analysis>>> <<<Inter-Annotator Agreement>>> We calculate the agreement on the full set of annotations from each phase for the two question types, namely open vs. closed, where the first deal with emotion classification and second with the roles cue, experiencer, cause, and target. <<<Emotion>>> We use Fleiss' Kappa ($\kappa $) to measure the inter-annotator agreement for closed questions BIBREF50, BIBREF51. 
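For reference, Fleiss' $\kappa $ can be computed directly from an item-by-category count matrix. The sketch below assumes the same number of ratings per item and uses a small toy matrix.

    import numpy as np

    def fleiss_kappa(counts):
        """counts: (n_items, n_categories) matrix; counts[i, j] = number of annotators
        assigning category j to item i. Assumes the same number of ratings per item."""
        counts = np.asarray(counts, dtype=np.float64)
        n_items, _ = counts.shape
        n_raters = counts.sum(axis=1)[0]

        p_j = counts.sum(axis=0) / (n_items * n_raters)            # category proportions
        P_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
        P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
        return (P_bar - P_e) / (1.0 - P_e)

    # Toy example: 4 headlines, 3 annotators, binary "emotional?" question.
    print(fleiss_kappa([[3, 0], [2, 1], [1, 2], [3, 0]]))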
In addition, we report the average percentage of overlaps between all pairs of annotators (%) and the mean entropy of annotations in bits. Higher agreement correlates with lower entropy. As Table TABREF38 shows, the agreement on the question whether a headline is emotional or not obtains the highest agreement ($0.34$), followed by the question on intensity ($0.22$). The lowest agreement is on the question to find the most dominant emotion ($0.09$). All metrics show comparably low agreement on the closed questions, especially on the question of the most dominant emotion. This is reasonable, given that emotion annotation is an ambiguous, subjective, and difficult task. This aspect lead to the decision of not purely calculating a majority vote label but to consider the diversity in human interpretation of emotion categories and publish the annotations by all annotators. Table TABREF40 shows the counts of annotators agreeing on a particular emotion. We observe that Love, Pride, and Sadness show highest intersubjectivity followed closely by Fear and Joy. Anger and Annoyance show, given their similarity, lower scores. Note that the micro average of the basic emotions (+ love) is $0.21$ for when more than five annotators agree. <<</Emotion>>> <<<Roles>>> Table TABREF41 presents the mean of pair-wise inter-annotator agreement for each role. We report average pair-wise Fleiss' $\kappa $, span-based exact $\textrm {F}_1$ over the annotated spans, accuracy, proportional token overlap, and the measure of agreement on set-valued items, MASI BIBREF52. We observe a fair agreement on the open annotation tasks. The highest agreement is for the role of the Experiencer, followed by Cue, Cause, and Target. This seems to correlate with the length of the annotated spans (see Table TABREF42). This finding is consistent with Kim2018. Presumably, Experiencers are easier to annotate as they often are noun phrases whereas causes can be convoluted relative clauses. <<</Roles>>> <<</Inter-Annotator Agreement>>> <<<General Corpus Statistics>>> In the following, we report numbers of the adjudicated data set for simplicity of discussion. Please note that we publish all annotations by all annotators and suggest that computational models should consider the distribution of annotations instead of one adjudicated gold. The latter for be a simplification which we consider to not be appropriate. GoodNewsEveryone contains $5,000$ headlines from various news sources described in the Media Bias Chart BIBREF17. Overall, the corpus is composed of $56,612$ words ($354,173$ characters) out of which $17,513$ are unique. The headline length is short with 11 words on average. The shortest headline contains 6 words while the longest headline contains 32 words. The length of a headline in characters ranges from 24 the shortest to 199 the longest. Table TABREF42 presents the total number of adjudicated annotations for each role in relation to the dominant emotion. GoodNewsEveryone consists of $5,000$ headlines, $3,312$ of which have annotated dominant emotion via majority vote. The rest of $1,688$ headlines (up to $5,000$) ended in ties for the most dominant emotion category and were adjudicated manually. The emotion category Negative Surprise has the highest number of annotations, while Love has the lowest number of annotations. In most cases, Cues are single tokens (e. 
g., “infuriates”, “slams”), Cause has the largest proportion of annotations that span more than seven tokens on average (65% out of all annotations in this category), For the role of Experiencer, we see the lowest number of annotations (19%), which is a very different result to the one presented by Kim2018, where the role Experiencer was the most annotated. We hypothesize that this is the effect of the domain we annotated; it is more likely to encounter explicit experiencers in literature (as literary characters) than in news headlines. As we can see, the cue and the cause relations dominate the dataset (27% each), followed by Target (25%) relations. Table TABREF42 also shows how many times each emotion triggered a certain relation. In this sense, Negative Surprise and Positive Surprise has triggered the most Experiencer, and Cause and Target relations, which due to the prevalence of the annotations for this emotion in the dataset. Further, Figure FIGREF44, shows the distances of the different roles from the cue. The causes and targets are predominantly realized right of the cue, while the experiencer occurs more often left of the cue. <<</General Corpus Statistics>>> <<</Analysis>>> <<<Baseline>>> As an estimate for the difficulty of the task, we provide baseline results. We formulate the task as sequence labeling of emotion cues, mentions of experiencers, targets, and causes with a bidirectional long short-term memory networks with a CRF layer (biLSTM-CRF) that uses Elmo embeddings as input and an IOB alphabet as output. The results are shown in Table TABREF45. <<</Baseline>>> <<<Conclusion & Future Work>>> We introduce GoodNewsEveryone, a corpus of $5,000$ headlines annotated for emotion categories, semantic roles, and reader perspective. Such a dataset enables answering instance-based questions, such as, “who is experiencing what emotion and why?” or more general questions, like “what are typical causes of joy in media?”. To annotate the headlines, we employ a two-phase procedure and use crowdsourcing. To obtain a gold dataset, we aggregate the annotations through automatic heuristics. As the evaluation of the inter-annotator agreement and the baseline model results show, the task of annotating structures encompassing emotions with the corresponding roles is a very difficult one. However, we also note that developing such a resource via crowdsourcing has its limitations, due to the subjective nature of emotions, it is very challenging to come up with an annotation methodology that would ensure less dissenting annotations for the domain of headlines. We release the raw dataset, the aggregated gold dataset, the carefully designed questionnaires, and baseline models as a freely available repository (partially only after acceptance of the paper). The released dataset will be useful for social science scholars, since it contains valuable information about the interactions of emotions in news headlines, and gives interesting insights into the language of emotion expression in media. Note that this dataset is also useful since it introduces a new dataset to test on structured prediction models. We are currently investigating the dataset for understanding the interaction between media bias and annotated emotions and roles. <<</Conclusion & Future Work>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nEmotion Classification\nEmotion Intensity\nCue or Trigger Words\nEmotion Cause Detection\nSemantic Role Labeling of Emotions\nReader vs. Writer vs. Text Perspective\nData Collection & Annotation\nCollecting Headlines\nFiltering & Postprocessing\nSampling Headlines\nRandom Sampling.\nSampling via NRC.\nSampling Entities.\nSampling based on Reddit & Twitter.\nAnnotation Procedure\nPhase 1: Selecting Emotional Headlines\nPhase 2: Emotion and Role Annotation\nQuality Control and Results\nAdjudication of Annotations\nRelative Majority Rule.\nMost Common Subsequence Rule.\nLongest Common Subsequence Rule.\nNoun Chunks\nAnalysis\nInter-Annotator Agreement\nEmotion\nRoles\nGeneral Corpus Statistics\nBaseline\nConclusion & Future Work" ], "type": "outline" }
1908.05969
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Simplify the Usage of Lexicon in Chinese NER <<<Abstract>>> Recently, many works have tried to utilizing word lexicon to augment the performance of Chinese named entity recognition (NER). As a representative work in this line, Lattice-LSTM \cite{zhang2018chinese} has achieved new state-of-the-art performance on several benchmark Chinese NER datasets. However, Lattice-LSTM suffers from a complicated model architecture, resulting in low computational efficiency. This will heavily limit its application in many industrial areas, which require real-time NER response. In this work, we ask the question: if we can simplify the usage of lexicon and, at the same time, achieve comparative performance with Lattice-LSTM for Chinese NER? ::: Started with this question and motivated by the idea of Lattice-LSTM, we propose a concise but effective method to incorporate the lexicon information into the vector representations of characters. This way, our method can avoid introducing a complicated sequence modeling architecture to model the lexicon information. Instead, it only needs to subtly adjust the character representation layer of the neural sequence model. Experimental study on four benchmark Chinese NER datasets shows that our method can achieve much faster inference speed, comparative or better performance over Lattice-LSTM and its follwees. It also shows that our method can be easily transferred across difference neural architectures. <<</Abstract>>> <<<Introduction>>> Named Entity Recognition (NER) is concerned with identifying named entities, such as person, location, product, and organization names, in unstructured text. In languages where words are naturally separated (e.g., English), NER was conventionally formulated as a sequence labeling problem, and the state-of-the-art results have been achieved by those neural-network-based models BIBREF1, BIBREF2, BIBREF3, BIBREF4. Compared with NER in English, Chinese NER is more difficult since sentences in Chinese are not previously segmented. Thus, one common practice in Chinese NER is first performing word segmentation using an existing CWS system and then applying a word-level sequence labeling model to the segmented sentence BIBREF5, BIBREF6. However, it is inevitable that the CWS system will wrongly segment the query sequence. This will, in turn, result in entity boundary detection errors and even entity category prediction errors in the following NER. Take the character sequence “南京市 (Nanjing) / 长江大桥 (Yangtze River Bridge)" as an example, where “/" indicates the gold segmentation result. If the sequence is segmented into “南京 (Nanjing) / 市长 (mayor) / 江大桥 (Daqiao Jiang)", the word-based NER system is definitely not able to correctly recognize “南京市 (Nanjing)" and “长江大桥 (Yangtze River Bridge)" as two entities of the location type. Instead, it is possible to incorrectly treat “南京 (Nanjing)" as a location entity and predict “江大桥 (Daqiao Jiang)" to be a person's name. Therefore, some works resort to performing Chinese NER directly on the character level, and it has been shown that this practice can achieve better performance BIBREF7, BIBREF8, BIBREF9, BIBREF0. 
A drawback of the purely character-based NER method is that word information, which has been proved to be useful, is not fully exploited. With this consideration, BIBREF0 proposed to incorporating word lexicon into the character-based NER model. In addition, instead of heuristically choosing a word for the character if it matches multiple words of the lexicon, they proposed to preserving all matched words of the character, leaving the following NER model to determine which matched word to apply. To achieve this, they introduced an elaborate modification to the LSTM-based sequence modeling layer of the LSTM-CRF model BIBREF1 to jointly model the character sequence and all of its matched words. Experimental studies on four public Chinese NER datasets show that Lattice-LSTM can achieve comparative or better performance on Chinese NER over existing methods. Although successful, there exists a big problem in Lattice-LSTM that limits its application in many industrial areas, where real-time NER responses are needed. That is, its model architecture is quite complicated. This slows down its inference speed and makes it difficult to perform training and inference in parallel. In addition, it is far from easy to transfer the structure of Lattice-LSTM to other neural-network architectures (e.g., convolutional neural networks and transformers), which may be more suitable for some specific datasets. In this work, we aim to find a easier way to achieve the idea of Lattice-LSTM, i.e., incorporating all matched words of the sentence to the character-based NER model. The first principle of our method design is to achieve a fast inference speed. To this end, we propose to encoding the matched words, obtained from the lexicon, into the representations of characters. Compared with Lattice-LSTM, this method is more concise and easier to implement. It can avoid complicated model architecture design thus has much faster inference speed. It can also be quickly adapted to any appropriate neural architectures without redesign. Given an existing neural character-based NER model, we only have to modify its character representation layer to successfully introduce the word lexicon. In addition, experimental studies on four public Chinese NER datasets show that our method can even achieve better performance than Lattice-LSTM when applying the LSTM-CRF model. Our source code is published at https://github.com/v-mipeng/LexiconAugmentedNER. <<</Introduction>>> <<<Generic Character-based Neural Architecture for Chinese NER>>> In this section, we provide a concise description of the generic character-based neural NER model, which conceptually contains three stacked layers. The first layer is the character representation layer, which maps each character of a sentence into a dense vector. The second layer is the sequence modeling layer. It plays the role of modeling the dependence between characters, obtaining a hidden representation for each character. The final layer is the label inference layer. It takes the hidden representation sequence as input and outputs the predicted label (with probability) for each character. We detail these three layers below. <<<Character Representation Layer>>> For a character-based Chinese NER model, the smallest unit of a sentence is a character and the sentence is seen as a character sequence $s=\lbrace c_1, \cdots , c_n\rbrace \in \mathcal {V}_c$, where $\mathcal {V}_c$ is the character vocabulary. 
Each character $c_i$ is represented using a dense vector (embedding): where $\mathbf {e}^{c}$ denotes the character embedding lookup table. <<<Char + bichar.>>> In addition, BIBREF0 has proved that character bigrams are useful for representing characters, especially for those methods not use word information. Therefore, it is common to augment the character representation with bigram information by concatenating bigram embeddings with character embeddings: where $\mathbf {e}^{b}$ denotes the bigram embedding lookup table, and $\oplus $ denotes the concatenation operation. The sequence of character representations $\mathbf {\mathrm {x}}_i^c$ form the matrix representation $\mathbf {\mathrm {x}}^s=\lbrace \mathbf {\mathrm {x}}_1^c, \cdots , \mathbf {\mathrm {x}}_n^c\rbrace $ of $s$. <<</Char + bichar.>>> <<</Character Representation Layer>>> <<<Sequence Modeling Layer>>> The sequence modeling layer models the dependency between characters built on vector representations of the characters. In this work, we explore the applicability of our method to three popular architectures of this layer: the LSTM-based, the CNN-based, and the transformer-based. <<<LSTM-based>>> The bidirectional long-short term memory network (BiLSTM) is one of the most commonly used architectures for sequence modeling BIBREF10, BIBREF3, BIBREF11. It contains two LSTM BIBREF12 cells that model the sequence in the left-to-right (forward) and right-to-left (backward) directions with two distinct sets of parameters. Here, we precisely show the definition of the forward LSTM: where $\sigma $ is the element-wise sigmoid function and $\odot $ represents element-wise product. $\mathbf {\mathrm {\mathrm {W}}} \in {\mathbf {\mathrm {\mathbb {R}}}^{4k_h\times (k_h+k_w)}}$ and $\mathbf {\mathrm {\mathrm {b}}}\in {\mathbf {\mathrm {\mathbb {R}}}^{4k_h}}$ are trainable parameters. The backward LSTM shares the same definition as the forward one but in an inverse sequence order. The concatenated hidden states at the $i^{th}$ step of the forward and backward LSTMs $\mathbf {\mathrm {h}}_i=[\overrightarrow{\mathbf {\mathrm {h}}}_i \oplus \overleftarrow{\mathbf {\mathrm {h}}}_i]$ forms the context-dependent representation of $c_i$. <<</LSTM-based>>> <<<CNN-based>>> Another popular architecture for sequence modeling is the convolution network BIBREF13, which has been proved BIBREF14 to be effective for Chinese NER. In this work, we apply a convolutional layer to model trigrams of the character sequence and gradually model its multigrams by stacking multiple convolutional layers. Specifically, let $\mathbf {\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\mathbf {\mathrm {h}}_i^0=\mathbf {\mathrm {x}}^c_i$, and $\mathbf {\mathrm {F}}^l \in \mathbb {R}^{k_l \times k_c \times 3}$ denote the corresponding filter used in this layer. To obtain the hidden representation $\mathbf {\mathrm {h}}^{l+1}_i$ of $c_i$ in the $(l+1)^{th}$ layer, it takes the convolution of $\mathbf {\mathrm {F}}^l$ over the 3-gram representation: where $\mathbf {\mathrm {h}}^l_{<i-1, i+1>} = [\mathbf {\mathrm {h}}^l_{i-1}; \mathbf {\mathrm {h}}^l_{i}; \mathbf {\mathrm {h}}^l_{i+1}]$ and $\langle A,B \rangle _i=\mbox{Tr}(AB[i, :, :]^T)$. This operation applies $L$ times, obtaining the final context-dependent representation, $\mathbf {\mathrm {h}}_i = \mathbf {\mathrm {h}}_i^L$, of $c_i$. 
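A minimal PyTorch sketch of this stack (char + bichar embeddings concatenated, then fed through stacked 3-gram convolutions) is given below; all dimensions and vocabulary sizes are placeholders rather than the settings used in the experiments.

    import torch
    import torch.nn as nn

    class CharBicharCNNEncoder(nn.Module):
        """Concatenate character and bigram embeddings, then stack 3-gram convolutions."""

        def __init__(self, n_chars, n_bigrams, char_dim=50, bichar_dim=50,
                     hidden_dim=128, n_layers=4):
            super().__init__()
            self.char_emb = nn.Embedding(n_chars, char_dim)
            self.bichar_emb = nn.Embedding(n_bigrams, bichar_dim)
            layers = []
            in_dim = char_dim + bichar_dim
            for _ in range(n_layers):
                layers.append(nn.Conv1d(in_dim, hidden_dim, kernel_size=3, padding=1))
                in_dim = hidden_dim
            self.convs = nn.ModuleList(layers)

        def forward(self, char_ids, bichar_ids):
            # (batch, seq_len, char_dim + bichar_dim)
            x = torch.cat([self.char_emb(char_ids), self.bichar_emb(bichar_ids)], dim=-1)
            h = x.transpose(1, 2)                    # Conv1d expects (batch, channels, seq_len)
            for conv in self.convs:
                h = torch.relu(conv(h))
            return h.transpose(1, 2)                 # (batch, seq_len, hidden_dim)

    # Toy usage with random ids.
    enc = CharBicharCNNEncoder(n_chars=5000, n_bigrams=20000)
    chars = torch.randint(0, 5000, (2, 10))
    bichars = torch.randint(0, 20000, (2, 10))
    print(enc(chars, bichars).shape)                 # torch.Size([2, 10, 128])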
<<</CNN-based>>> <<<Transformer-based>>> Transformer BIBREF15 is originally proposed for sequence transduction, on which it has shown several advantages over the recurrent or convolutional neural networks. Intrinsically, it can also be applied to the sequence labeling task using only its encoder part. In similar, let $\mathbf {\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\mathbf {\mathrm {h}}_i^0=\mathbf {\mathrm {x}}^c_i$, and $f^l$ denote a feedforward module used in this layer. To obtain the hidden representation matrix $\mathbf {\mathrm {h}}^{l+1}$ of $s$ in the $(l+1)^{th}$ layer, it takes the self-attention of $\mathbf {\mathrm {h}}^l$: where $d^l$ is the dimension of $\mathbf {\mathrm {h}}^l_i$. This process applies $L$ times, obtaining $\mathbf {\mathrm {h}}^L$. After that, the position information of each character $c_i$ is introduced into $\mathbf {\mathrm {h}}^L_i$ to obtain its final context-dependent representation $\mathbf {\mathrm {h}}_i$: where $PE_i=sin(i/1000^{2j/d^L}+j\%2\cdot \pi /2)$. We recommend you to refer to the excellent guides “The Annotated Transformer.” for more implementation detail of this architecture. <<</Transformer-based>>> <<</Sequence Modeling Layer>>> <<<Label Inference Layer>>> On top of the sequence modeling layer, a sequential conditional random field (CRF) BIBREF16 layer is applied to perform label inference for the character sequence as a whole: where $\mathcal {Y}_s$ denotes all possible label sequences of $s$, $\phi _{t}({y}^\prime , {y}|\mathbf {\mathrm {s}})=\exp (\mathbf {w}^T_{{y}^\prime , {y}} \mathbf {\mathrm {h}}_t + b_{{y}^\prime , {y}})$, where $\mathbf {w}_{{y}^\prime , {y}}$ and $ b_{{y}^\prime , {y}}$ are trainable parameters corresponding to the label pair $({y}^\prime , {y})$, and $\mathbf {\theta }$ denotes model parameters. For label inference, it searches for the label sequence $\mathbf {\mathrm {y}}^{*}$ with the highest conditional probability given the input sequence ${s}$: which can be efficiently solved using the Viterbi algorithm BIBREF17. <<</Label Inference Layer>>> <<</Generic Character-based Neural Architecture for Chinese NER>>> <<<Lattice-LSTM for Chinese NER>>> Lattice-LSTM designs to incorporate word lexicon into the character-based neural sequence labeling model. To achieve this purpose, it first performs lexicon matching on the input sentence. It will add an directed edge from $c_i$ to $c_j$, if the sub-sequence $\lbrace c_i, \cdots , c_j\rbrace $ of the sentence matches a word of the lexicon for $i < j$. And it preserves all lexicon matching results on a character by allowing the character to connect with multiple characters. Concretely, for a sentence $\lbrace c_1, c_2, c_3, c_4, c_5\rbrace $, if both its sub-sequences $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ match a word of the lexicon, it will add a directed edge from $c_1$ to $c_4$ and a directed edge from $c_2$ to $c_4$. This practice will turn the input form of the sentence from a chained sequence into a graph. To model the graph-based input, Lattice-LSTM accordingly modifies the LSTM-based sequence modeling layer. 
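Viterbi decoding for the CRF label inference layer described above can be sketched in a few lines over per-step emission scores and pairwise transition scores; the scores below are random placeholders, not learned parameters.

    import numpy as np

    def viterbi_decode(emissions, transitions):
        """emissions: (seq_len, n_labels) scores; transitions: (n_labels, n_labels)
        scores for moving from label y' to label y. Returns the best label sequence."""
        seq_len, n_labels = emissions.shape
        score = emissions[0].copy()
        backpointers = np.zeros((seq_len, n_labels), dtype=int)

        for t in range(1, seq_len):
            # candidate[i, j] = best score ending in label i at t-1, then moving to j at t
            candidate = score[:, None] + transitions + emissions[t][None, :]
            backpointers[t] = candidate.argmax(axis=0)
            score = candidate.max(axis=0)

        best_last = int(score.argmax())
        path = [best_last]
        for t in range(seq_len - 1, 0, -1):
            path.append(int(backpointers[t][path[-1]]))
        return path[::-1]

    # Toy example with 4 characters and 3 labels (e.g. B, I, O).
    rng = np.random.default_rng(0)
    print(viterbi_decode(rng.normal(size=(4, 3)), rng.normal(size=(3, 3))))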
Specifically, let $s_{<*, j>}$ denote the list of sub-sequences of a sentence $s$ that match the lexicon and end with $c_j$, $\mathbf {\mathrm {h}}_{<*, j>}$ denote the corresponding hidden state list $\lbrace \mathbf {\mathrm {h}}_i, \forall s_{<i, j>} \in s_{<*, j>}\rbrace $, and $\mathbf {\mathrm {c}}_{<*, j>}$ denote the corresponding memory cell list $\lbrace \mathbf {\mathrm {c}}_i, \forall s_{<i, j>} \in s_{<*, j>}\rbrace $. In Lattice-LSTM, the hidden state $\mathbf {\mathrm {h}}_j$ and memory cell $\mathbf {\mathrm {c}}_j$ of $c_j$ are now updated by: where $f$ is a simplified representation of the function used by Lattice-LSTM to perform memory update. Note that, in the updating process, the inputs now contains current step character representation $\mathbf {\mathrm {x}}_j^c$, last step hidden state $\mathbf {\mathrm {h}}_{j-1}$ and memory cell $\mathbf {\mathrm {c}}_{j-1}$, and lexicon matched sub-sequences $s_{<*, j>}$ and their corresponding hidden state and memory cell lists, $\mathbf {\mathrm {h}}_{<*, j>}$ and $\mathbf {\mathrm {c}}_{<*, j>}$. We refer you to the paper of Lattice-LSTM BIBREF0 for more detail of the implementation of $f$. A problem of Lattice-LSTM is that its speed of sequence modeling is much slower than the normal LSTM architecture since it has to additionally model $s_{<*, j>}$, $\mathbf {\mathrm {h}}_{<*, j>}$, and $\mathbf {\mathrm {c}}_{<*, j>}$ for memory update. In addition, considering the implementation of $f$, it is hard for Lattice-LSTM to process multiple sentences in parallel (in the published implementation of Lattice-LSTM, the batch size was set to 1). This raises the necessity to design a simpler way to achieve the function of Lattice-LSTM for incorporating the word lexicon into the character-based NER model. <<</Lattice-LSTM for Chinese NER>>> <<<Proposed Method>>> In this section, we introduce our method, which aims to keep the merit of Lattice-LSTM and at the same time, make the computation efficient. We will start the description of our method from our thinking on Lattice-LSTM. From our view, the advance of Lattice-LSTM comes from two points. The first point is that it preserve all possible matching words for each character. This can avoid the error propagation introduced by heuristically choosing a matching result of the character to the NER system. The second point is that it can introduce pre-trained word embeddings to the system, which bring great help to the final performance. While the disadvantage of Lattice-LSTM is that it turns the input form of a sentence from a chained sequence into a graph. This will greatly increase the computational cost for sentence modeling. Therefore, the design of our method should try to keep the chained input form of the sentence and at the same time, achieve the above two advanced points of Lattice-LSTM. With this in mind, our method design was firstly motivated by the Softword technique, which was originally used for incorporating word segmentation information into downstream tasks BIBREF18, BIBREF19. 
Precisely, the Softword technique augments the representation of a character with the embedding of its corresponding segmentation label: Here, $seg(c_j) \in \mathcal {Y}_{seg}$ denotes the segmentation label of the character $c_j$ predicted by the word segmentor, $\mathbf {e}^{seg}$ denotes the segmentation label embedding lookup table, and commonly $\mathcal {Y}_{seg}=\lbrace \text{B}, \text{M}, \text{E}, \text{S}\rbrace $ with B, M, E indicating that the character is the beginning, middle, and end of a word, respectively, and S indicating that the character itself forms a single-character word. The first idea we come out based on the Softword technique is to construct a word segmenter using the lexicon and allow a character to have multiple segmentation labels. Take the sentence $s=\lbrace c_1, c_2, c_3, c_4, c_5\rbrace $ as an example. If both its sub-sequences $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_3, c_4\rbrace $ match a word of the lexicon, then the segmentation label sequence of $s$ using the lexicon is $segs(s)=\lbrace \lbrace \text{B}\rbrace , \lbrace \text{M}\rbrace , \lbrace \text{B}, \text{M}\rbrace , \lbrace \text{E}\rbrace , \lbrace \text{O}\rbrace \rbrace $. Here, $segs(s)_1=\lbrace \text{B}\rbrace $ indicates that there is at least one sub-sequence of $s$ matching a word of the lexicon and beginning with $c_1$, $segs(s)_3=\lbrace \text{B}, \text{M}\rbrace $ means that there is at least one sub-sequence of $s$ matching the lexicon and beginning with $c_3$ and there is also at least one lexicon matched sub-sequence in the middle of which $c_3$ occurs, and $segs(s)_5=\lbrace \text{O}\rbrace $ means that there is no sub-sequence of $s$ that matches the lexicon and contains $c_5$. The character representation is then obtained by: where $\mathbf {e}^{seg}(segs(s)_j)$ is a 5-dimensional binary vector with each dimension corresponding to an item of $\lbrace \text{B, M, E, S, O\rbrace }$. We call this method as ExSoftword in the following. However, through the analysis of ExSoftword, we can find out that the ExSoftword method cannot fully inherit the two merits of Lattice-LSTM. Firstly, it cannot not introduce pre-trained word embeddings. Secondly, though it tries to keep all the lexicon matching results by allowing a character to have multiple segmentation labels, it still loses lots of information. In many cases, we cannot restore the matching results from the segmentation label sequence. Consider the case that in the sentence $s=\lbrace c_1, c_2, c_3, c_4\rbrace $, $\lbrace c_1, c_2, c_3\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ match the lexicon. In this case, $segs(s) = \lbrace \lbrace \text{B}\rbrace , \lbrace \text{B}, \text{M}\rbrace , \lbrace \text{M}, \text{E}\rbrace , \lbrace \text{E}\rbrace \rbrace $. However, based on $segs(s)$ and $s$, we cannot say that it is $\lbrace c_1, c_2, c_3\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ matching the lexicon since we will obtain the same segmentation label sequence when $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_2,c_3\rbrace $ match the lexicon. To this end, we propose to preserving not only the possible segmentation labels of a character but also their corresponding matched words. Specifically, in this improved method, each character $c$ of a sentence $s$ corresponds to four word sets marked by the four segmentation labels “BMES". The word set $\rm {B}(c)$ consists of all lexicon matched words on $s$ that begin with $c$. 
Similarly, $\rm {M}(c)$ consists of all lexicon-matched words in the middle of which $c$ occurs, $\rm {E}(c)$ consists of all lexicon-matched words that end with $c$, and $\rm {S}(c)$ is the single-character word comprised of $c$. If a word set is empty, we add a special word "NONE" to it to indicate this situation. Consider the sentence $s=\lbrace c_1, \cdots , c_5\rbrace$ and suppose that $\lbrace c_1, c_2\rbrace$, $\lbrace c_1, c_2, c_3\rbrace$, $\lbrace c_2, c_3, c_4\rbrace$, and $\lbrace c_2, c_3, c_4, c_5\rbrace$ match the lexicon. Then, for $c_2$, $\rm {B}(c_2)=\lbrace \lbrace c_2, c_3, c_4\rbrace , \lbrace c_2, c_3, c_4, c_5\rbrace \rbrace$, $\rm {M}(c_2)=\lbrace \lbrace c_1, c_2, c_3\rbrace \rbrace$, $\rm {E}(c_2)=\lbrace \lbrace c_1, c_2\rbrace \rbrace$, and $\rm {S}(c_2)=\lbrace \text{NONE}\rbrace$. In this way, we can now introduce the pre-trained word embeddings and, moreover, we can exactly restore the matching results from the word sets of each character. The next step of the improved method is to condense the four word sets of each character into a fixed-dimensional vector. In order to retain as much information as possible, we choose to concatenate the representations of the four word sets and add the result to the character representation: $\mathbf{\mathrm{x}}_j^c \leftarrow [\mathbf{\mathrm{x}}_j^c; \mathbf{v}^s(\rm {B}(c_j)); \mathbf{v}^s(\rm {M}(c_j)); \mathbf{v}^s(\rm {E}(c_j)); \mathbf{v}^s(\rm {S}(c_j))]$. Here, $\mathbf {v}^s$ denotes the function that maps a single word set to a dense vector. This also means that we should map each word set into a fixed-dimensional vector. To achieve this, we first tried the mean-pooling algorithm to obtain the vector representation of a word set $\mathcal {S}$: $\mathbf{v}^s(\mathcal{S}) = \frac{1}{|\mathcal{S}|} \sum_{w \in \mathcal{S}} \mathbf{e}^w(w)$. Here, $\mathbf {e}^w$ denotes the word embedding lookup table. However, the empirical studies, as depicted in Table TABREF31, show that this algorithm does not perform well. Through a comparison with Lattice-LSTM, we find that Lattice-LSTM applies a dynamic attention mechanism to weight each matched word related to a single character. Motivated by this practice, we propose to weight the representation of each word in a word set to obtain the pooled representation of the set. However, considering computational efficiency, we do not want to apply a dynamic weighting algorithm, such as attention, to obtain the weight of each word. With this in mind, we propose to use the frequency of a word as an indication of its weight. The basic idea behind this choice is that the more often a character sequence occurs in the data, the more likely it is a word. Note that the frequency of a word is a static value that can be obtained offline. This greatly accelerates the calculation of the weight of each word (e.g., using a lookup table). Specifically, let $w_c$ denote the character sequence constituting $w$ and $z(w)$ denote the frequency with which $w_c$ occurs in the statistics data set (in this work, we combine the training and testing data of a task to construct the statistics data set; of course, if unlabelled data is available for the task, it can serve as the statistics data set). Note that we do not count an occurrence of $w_c$ if it is covered by a longer lexicon-matched word in the sentence. For example, suppose that the lexicon contains both "南京 (Nanjing)" and "南京市 (Nanjing City)". Then, when counting word frequencies on the sequence "南京市长江大桥", we do not count that occurrence of "南京", since it is covered by "南京市" in this sequence. This avoids the situation in which the frequency of "南京" is always higher than that of "南京市".
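The frequency statistic $z(w)$ described above can be sketched as follows; this is an illustration rather than our actual implementation, and the toy lexicon, the helper names and the maximum word length are assumptions made only for this example.

```python
# Minimal sketch of the static word-frequency statistic z(w): occurrences
# of a lexicon word are not counted when a strictly longer lexicon match
# covers them in the same sentence.
from collections import Counter

def count_lexicon_frequencies(sentences, lexicon, max_word_len=5):
    z = Counter()
    for chars in sentences:
        spans = []
        for j in range(len(chars)):
            for i in range(max(0, j - max_word_len + 1), j + 1):
                word = "".join(chars[i:j + 1])
                if word in lexicon:
                    spans.append((i, j, word))
        for i, j, word in spans:
            # skip this occurrence if a strictly longer match covers it
            covered = any(i2 <= i and j <= j2 and (j2 - i2) > (j - i)
                          for i2, j2, _ in spans)
            if not covered:
                z[word] += 1
    return z

# toy usage with a hypothetical statistics data set
lexicon = {"南京", "南京市", "长江", "长江大桥", "大桥"}
print(count_lexicon_frequencies([list("南京市长江大桥")], lexicon))
# "南京" is not counted here because "南京市" covers it.
```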
Finally, we obtain the weighted representation of the word set $\mathcal {S}$ by $\mathbf{v}^s(\mathcal{S}) = \frac{1}{Z} \sum_{w \in \mathcal{S}} z(w)\, \mathbf{e}^w(w)$, where $Z = \sum_{w \in \rm {B} \cup \rm {M} \cup \rm {E} \cup \rm {S}} z(w)$ sums over all words of the four word sets of the current character. Here, we perform weight normalization over all words of the four word sets to allow them to compete with each other across sets. Further, we have tried introducing smoothing into the word weights to increase the weights of infrequent words. Specifically, we add a constant $c$ to the frequency of each word and re-define $\mathbf {v}^s$ by $\mathbf{v}^s(\mathcal{S}) = \frac{1}{Z^{\prime }} \sum_{w \in \mathcal{S}} \left[z(w) + c\right] \mathbf{e}^w(w)$, where $Z^{\prime } = \sum_{w \in \rm {B} \cup \rm {M} \cup \rm {E} \cup \rm {S}} \left[z(w) + c\right]$. We set $c$ such that 10% of the training words occur fewer than $c$ times within the statistics data set. In summary, our method mainly contains the following four steps. Firstly, we scan each input sentence with the word lexicon, obtaining the four "BMES" word sets for each character of the sentence. Secondly, we look up the frequency of each word counted on the statistics data set. Thirdly, we obtain the vector representation of the four word sets of each character according to the weighted pooling formula above, and add it to the character representation as described above. Finally, based on the augmented character representations, we perform sequence labeling using any appropriate neural sequence labeling model, such as an LSTM-based sequence modeling layer with a CRF label inference layer. <<</Proposed Method>>> <<<Experiments>>> <<<Experiment Design>>> Firstly, we performed a development study on our method with the LSTM-based sequence modeling layer, in order to compare the implementations of $\mathbf {v}^s$ and to determine whether or not to use character bigrams in our method. The decisions made in this step are applied in the following experiments. Secondly, we verified the computational efficiency of our method compared with Lattice-LSTM and LR-CNN BIBREF20, a follow-up of Lattice-LSTM that aims for faster inference speed. Thirdly, we verified the effectiveness of our method by comparing its performance with that of Lattice-LSTM and other comparable models on four benchmark Chinese NER data sets. Finally, we verified the applicability of our method to different sequence labeling models. <<</Experiment Design>>> <<<Experiment Setup>>> Most experimental settings in this work follow the protocols of Lattice-LSTM BIBREF0, including the tested datasets, compared baselines, evaluation metrics (P, R, F1), and so on. To make this work self-contained, we concisely describe the primary settings. <<<Datasets>>> The methods were evaluated on four Chinese NER datasets, including OntoNotes BIBREF21, MSRA BIBREF22, Weibo NER BIBREF23, BIBREF24, and Resume NER BIBREF0. OntoNotes and MSRA are from the newswire domain, where gold-standard segmentation is available for the training data. For OntoNotes, gold segmentation is also available for the development and testing data. Weibo NER and Resume NER are from social media and resumes, respectively. There is no gold-standard segmentation in these two datasets. Table TABREF26 shows statistics of these datasets. As for the lexicon, we used the same one as Lattice-LSTM, which contains 5.7k single-character words, 291.5k two-character words, 278.1k three-character words, and 129.1k other words. <<</Datasets>>> <<<Implementation Detail>>> When applying the LSTM-based sequence modeling layer, we followed most implementation protocols of Lattice-LSTM, including character and word embedding sizes, dropout, embedding initialization, and the number of LSTM layers. The hidden size was set to 100 for Weibo and 256 for the other three datasets.
The learning rate was set to 0.005 for Weibo and Resume and 0.0015 for OntoNotes and MSRA, using Adamax BIBREF25. When applying the CNN- and transformer-based sequence modeling layers, most hyper-parameters were the same as those used in the LSTM-based model. In addition, the layer number $L$ for the CNN-based model was set to 4, and that for the transformer-based model was set to 2 with $h=4$ parallel attention layers. The kernel number $k_f$ of the CNN-based model was set to 512 for MSRA and 128 for the other datasets in all layers. <<</Implementation Detail>>> <<</Experiment Setup>>> <<<Development Experiments>>> In this experiment, we compared the implementations of $\mathbf {v}^s$ with the LSTM-based sequence modeling layer. In addition, we studied whether or not character bigrams can bring improvement to our method. Table TABREF31 shows the performance of the three implementations of $\mathbf {v}^s$ without using character bigrams. From the table, we can see that the weighted pooling algorithm generally performs better than the other two implementations. Of course, we might obtain better results with the smoothed weighted pooling algorithm by reducing the value of $c$ (when $c=0$, it is equivalent to the weighted pooling algorithm). We did not do so for two reasons. The first is to guarantee the generality of our system for unexplored tasks. The second is that the performance of the weighted pooling algorithm is good enough compared with other state-of-the-art baselines. Therefore, in the following experiments, we applied the weighted pooling algorithm to implement $\mathbf {v}^s$ by default. Figure FIGREF32 shows the F1-score of our method against the number of training iterations with and without character bigrams. From the figure, we can see that additionally introducing character bigrams does not bring considerable improvement to our method. A possible explanation of this phenomenon is that the word information introduced by our proposed method already covers the bichar information. Therefore, in the following experiments, we did not use bichar in our method. <<</Development Experiments>>> <<<Computational Efficiency Study>>> Table TABREF34 shows the inference speed of our method when implementing the sequence modeling layer with the LSTM-based, CNN-based, and Transformer-based architectures, respectively. The speed was evaluated as the average number of sentences processed per second using a GPU (NVIDIA TITAN X). For a fair comparison with Lattice-LSTM and LR-CNN, we set the batch size of our method to 1 at inference time. From the table, we can see that our method has a much faster inference speed than Lattice-LSTM when using the LSTM-based sequence modeling layer, and it is also much faster than LR-CNN, which uses a CNN architecture to implement the sequence modeling layer. As expected, our method with the CNN-based sequence modeling layer shows some advantage in inference speed over those with the LSTM-based and Transformer-based sequence modeling layers. <<</Computational Efficiency Study>>> <<<Effectiveness Study>>> Tables TABREF37$-$TABREF43 show the performance of our method with the LSTM-based sequence modeling layer compared with Lattice-LSTM and other comparative baselines. <<<OntoNotes.>>> Table TABREF37 shows results on OntoNotes, which has gold segmentation for both training and testing data. The methods of the "Gold seg" and "Auto seg" groups are word-based and build on the gold word segmentation results and the automatic segmentation results, respectively.
The automatic segmentation results were generated by a segmenter trained on the training data of OntoNotes. Methods of the "No seg" group are character-based. From the table, we can make several informative observations. First, by replacing the gold segmentation with the automatically generated segmentation, the F1-score of the Word-based (LSTM) + char + bichar model decreased from 75.77% to 71.70%. This shows the problem with treating the predicted word segmentation result as the true one for word-based Chinese NER. Second, the Char-based (LSTM)+bichar+ExSoftword model improved the F1-score of the Char-based (LSTM)+bichar+softword baseline from 71.89% to 72.40%. This indicates the feasibility of ExSoftword as a naive extension of Softword. However, it still greatly underperformed Lattice-LSTM, showing its deficiency in utilizing word information. Finally, our proposed method, which is a further extension of ExSoftword, obtained a statistically significant improvement over Lattice-LSTM and even performed similarly to the word-based methods with gold segmentation, verifying its effectiveness on this data set. <<</OntoNotes.>>> <<<MSRA.>>> Table TABREF40 shows results on MSRA. The word-based methods were built on the automatic segmentation results generated by a segmenter trained on the training data of MSRA. The compared methods included the best statistical models on this data set, which leveraged rich handcrafted features BIBREF28, BIBREF29, BIBREF30, character embedding features BIBREF31, and radical features BIBREF32. From the table, we observe that our method obtained a statistically significant improvement over Lattice-LSTM and the other comparative baselines on the recall and F1-score, verifying the effectiveness of our method on this data set. <<</MSRA.>>> <<<Weibo/Resume.>>> Table TABREF42 shows results on Weibo NER, where NE, NM, and Overall denote F1-scores for named entities, nominal entities (excluding named entities), and both, respectively. The existing state-of-the-art system BIBREF19 explored rich embedding features, cross-domain data, and semi-supervised data. From the table, we can see that our proposed method achieved considerable improvement over the compared baselines on this data set. Table TABREF43 shows results on Resume. Consistent with the observations on the other three tested data sets, our proposed method significantly outperformed Lattice-LSTM and the other comparable methods on this data set. <<</Weibo/Resume.>>> <<</Effectiveness Study>>> <<<Transferability Study>>> Table TABREF46 shows the performance of our method with different sequence modeling architectures. From the table, we can first see that the LSTM-based architecture performed better than the CNN- and transformer-based architectures. In addition, our method with different sequence modeling layers consistently outperformed the corresponding ExSoftword baselines. This shows that our method is applicable to different neural sequence modeling architectures for exploiting lexicon information. <<</Transferability Study>>> <<</Experiments>>> <<<Conclusion>>> In this work, we address the computational efficiency of utilizing a word lexicon in Chinese NER. To achieve a high-performing NER system with fast inference speed, we proposed adding lexicon information to the character representations while keeping the input form of a sentence as a chained sequence.
The experimental study on four benchmark Chinese NER datasets shows that our method obtains a faster inference speed than the comparative methods while achieving high performance. It also shows that our method can be applied to different neural sequence labeling models for Chinese NER. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nGeneric Character-based Neural Architecture for Chinese NER\nCharacter Representation Layer\nChar + bichar.\nSequence Modeling Layer\nLSTM-based\nCNN-based\nTransformer-based\nLabel Inference Layer\nLattice-LSTM for Chinese NER\nProposed Method\nExperiments\nExperiment Design\nExperiment Setup\nDatasets\nImplementation Detail\nDevelopment Experiments\nComputational Efficiency Study\nEffectiveness Study\nOntoNotes.\nMSRA.\nWeibo/Resume.\nTransferability Study\nConclusion" ], "type": "outline" }
1910.13215
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Transformer-based Cascaded Multimodal Speech Translation <<<Abstract>>> This paper describes the cascaded multimodal speech translation systems developed by Imperial College London for the IWSLT 2019 evaluation campaign. The architecture consists of an automatic speech recognition (ASR) system followed by a Transformer-based multimodal machine translation (MMT) system. While the ASR component is identical across the experiments, the MMT model varies in terms of the way of integrating the visual context (simple conditioning vs. attention), the type of visual features exploited (pooled, convolutional, action categories) and the underlying architecture. For the latter, we explore both the canonical transformer and its deliberation version with additive and cascade variants which differ in how they integrate the textual attention. Upon conducting extensive experiments, we found that (i) the explored visual integration schemes often harm the translation performance for the transformer and additive deliberation, but considerably improve the cascade deliberation; (ii) the transformer and cascade deliberation integrate the visual modality better than the additive deliberation, as shown by the incongruence analysis. <<</Abstract>>> <<<Introduction>>> The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition system (ASR) and further translated into the target language using a machine translation (MT) component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpora as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system. MMT is a relatively new research topic which is interested in leveraging auxiliary modalities such as audio or vision in order to improve translation performance BIBREF6. MMT has proved effective in scenarios such as for disambiguation BIBREF7 or when the source sentences are corrupted BIBREF8. So far, MMT has mostly focused on integrating visual features into neural MT (NMT) systems using visual attention through convolutional feature maps BIBREF9, BIBREF10 or visual conditioning of encoder/decoder blocks through fully-connected features BIBREF11, BIBREF12, BIBREF13, BIBREF14. Inspired by previous research in MMT, we explore several multimodal integration schemes using action-level video features. 
Specifically, we experiment with visually conditioning the encoder output and adding visual attention to the decoder. We further extend the proposed schemes to the deliberation variant BIBREF1 of the canonical transformer in two ways: additive and cascade multimodal deliberation, which are distinct in their textual attention regimes. Overall, the results show that multimodality in general leads to performance degradation for the canonical transformer and the additive deliberation variant, but can result in substantial improvements for the cascade deliberation. Our incongruence analysis BIBREF15 reveals that the transformer and cascade deliberation are more sensitive to and therefore more reliant on visual features for translation, whereas the additive deliberation is much less impacted. We also observe that incongruence sensitivity and translation performance are not necessarily correlated. <<</Introduction>>> <<<Methods>>> In this section, we briefly describe the proposed multimodal speech translation system and its components. <<<Automatic Speech Recognition>>> The baseline ASR system that we use to obtain English transcripts is an attentive sequence-to-sequence architecture with a stacked encoder of 6 bidirectional LSTM layers BIBREF16. Each LSTM layer is followed by a tanh projection layer. The middle two LSTM layers apply temporal subsampling BIBREF17 by skipping every other input, reducing the length of the sequence $\mathrm {X}$ from $T$ to $T/4$. All LSTM and projection layers have 320 hidden units. The forward pass of the encoder produces the source encodings on top of which attention will be applied within the decoder. The hidden and cell states of all LSTM layers are initialized with 0. The decoder is a 2-layer stacked GRU BIBREF18, where the first GRU receives the previous hidden state of the second GRU in a transitional way. GRU layers, the attention layer and the embeddings have 320 hidden units. We share the input and output embeddings to reduce the number of parameters BIBREF19. At timestep $t=0$, the hidden state of the first GRU is initialized with the average-pooled source encoding states. <<</Automatic Speech Recognition>>> <<<Deliberation-based NMT>>> A human translator typically produces a translation draft first, and then refines it towards the final translation. The idea behind deliberation networks BIBREF20 is to simulate this process by extending the conventional attentive encoder-decoder architecture BIBREF21 with a second-pass refinement decoder. Specifically, the encoder first encodes a source sentence of length $N$ into a sequence of hidden states $\mathcal {H} = \lbrace h_1, h_2,\dots ,h_{N}\rbrace $ on top of which the first-pass decoder applies attention. The pre-softmax hidden states $\lbrace \hat{s}_1,\hat{s}_2,\dots ,\hat{s}_{M}\rbrace $ produced by this decoder lead to a first-pass translation $\lbrace \hat{y}_1,\hat{y}_2,\dots , \hat{y}_{M}\rbrace $. The second-pass decoder intervenes at this point and generates a second translation by attending separately to both $\mathcal {H}$ and the concatenated state vectors $\lbrace [\hat{s}_1;\hat{y}_1], [\hat{s}_2; \hat{y}_2],\dots ,[\hat{s}_{M}; \hat{y}_{M}]\rbrace $. Two context vectors are produced as a result; together with $s_{t-1}$ (the previous hidden state of the second-pass decoder) and $y_{t-1}$ (its previous output), they are fed to the second-pass decoder to yield $s_t$ and then $y_t$. A transformer-based deliberation architecture is proposed by BIBREF1.
It follows the same two-pass refinement process, with every second-pass decoder block attending to both the encoder output $\mathcal {H}$ and the first-pass pre-softmax hidden states $\mathcal {\hat{S}}$. However, it differs from BIBREF20 in that the actual first-pass translation $\hat{Y}$ is not used for the second-pass attention. <<</Deliberation-based NMT>>> <<<Multimodality>>> <<<Visual Features>>> We experiment with three types of video features, namely average-pooled vector representations, convolutional layer outputs, and ten-hot action category embeddings. The average-pooled features are provided by the How2 dataset using the following approach: a video is segmented into smaller parts of 16 frames each, and the segments are fed to a 3D ResNeXt-101 CNN BIBREF22, trained to recognise 400 action classes BIBREF23. The 2048-D fully-connected features are then averaged across the segments to obtain a single feature vector for the overall video. In order to obtain the convolutional features, 16 equi-distant frames are sampled from a video, and they are then used as input to an inflated 3D ResNet-50 CNN BIBREF24 fine-tuned on the Moments in Time action video dataset. The CNN hence takes in a video and classifies it into one of 339 categories. The convolutional features, taken at the CONV$_4$ layer of the network, have a $7 \times 7 \times 2048$ dimensionality. Higher-level semantic information can be more helpful than convolutional features. We apply the same CNN to a video as we do for the convolutional features, but this time the focus is on the softmax layer output: we process the embedding matrix to keep the 10 most probable category embeddings intact while zeroing out the remaining ones. We call this representation ten-hot action category embeddings. <<</Visual Features>>> <<<Integration Approaches>>> Encoder with Additive Visual Conditioning: In this approach, inspired by BIBREF7, we add a projection of the visual features to each output of the vanilla transformer encoder. This projection is strictly linear from the 2048-D features to the 1024-D space in which the self-attention hidden states reside, and the projection matrix is learned jointly with the translation model. Decoder with Visual Attention: In order to accommodate attention to visual features at the decoder side, and inspired by BIBREF25, we insert one layer of visual cross-attention at a decoder block immediately before the fully-connected layer. In the transformer decoder with such an extra layer, the visual attention comes immediately after the textual attention to the encoder output. Specifically, we experiment with attention to each of the three feature types separately. The visual attention is distributed across the 49 video regions of the convolutional features, the 339 action category word embeddings, or the 32 rows of the average-pooled features, for which we reshape the 2048-D vector into a $32 \times 64$ matrix. <<</Integration Approaches>>> <<<Multimodal Transformers>>> The vanilla text-only transformer is used as a baseline, and we design two variants: one with additive visual conditioning and one with attention to visual features. The former combines the visually conditioned encoder with a vanilla transformer decoder, therefore utilising visual information only at the encoder side. In contrast, the latter is configured with a vanilla encoder and the decoder with visual attention, exploiting visual cues only at the decoder. Figure FIGREF7 summarises the two approaches.
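For illustration, the following PyTorch sketch shows one way such a decoder block with an extra visual cross-attention layer could look; it is not the authors' implementation, and the layer names, the normalisation placement and the linear projection of the 2048-D visual features are assumptions made for this example (the 1024-D model size and 16 heads follow the transformer_big setting used later).

```python
# Illustrative sketch of a transformer decoder block with a visual
# cross-attention layer inserted after the textual cross-attention and
# before the position-wise feed-forward layer.
import torch
import torch.nn as nn

class VisualDecoderBlock(nn.Module):
    def __init__(self, d_model=1024, n_heads=16, d_ff=4096, d_vis=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.text_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.vis_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.vis_proj = nn.Linear(d_vis, d_model)  # map visual features to model space
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])

    def forward(self, x, enc_out, vis_feats, tgt_mask=None):
        # 1) masked self-attention over the target prefix
        x = self.norms[0](x + self.self_attn(x, x, x, attn_mask=tgt_mask)[0])
        # 2) textual cross-attention to the encoder output
        x = self.norms[1](x + self.text_attn(x, enc_out, enc_out)[0])
        # 3) visual cross-attention, e.g. over the 49 regions of the conv features
        v = self.vis_proj(vis_feats)               # (batch, regions, d_model)
        x = self.norms[2](x + self.vis_attn(x, v, v)[0])
        # 4) position-wise feed-forward
        return self.norms[3](x + self.ffn(x))

# toy usage: a batch of 2 videos with 49 regions of 2048-D conv features
block = VisualDecoderBlock()
out = block(torch.randn(2, 7, 1024), torch.randn(2, 11, 1024), torch.randn(2, 49, 2048))
```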
<<</Multimodal Transformers>>> <<<Multimodal Deliberation>>> Our multimodal deliberation models differ from each other in two ways: whether to use additive () BIBREF7 or cascade () textual deliberation to integrate the textual attention to the original input and to the first pass, and whether to employ visual attention (-) or additive visual conditioning (-) to integrate the visual features into the textual MT model. Figures FIGREF9 and FIGREF10 show the configurations of our additive and cascade deliberation models, respectively, each also showing the connections necessary for -and -. Additive () & Cascade () Textual Deliberation In an additive-deliberation second-pass decoder (–) block, the first layer is still self-attention, whereas the second layer is the addition of two separate attention sub-layers. The first sub-layer attends to the encoder output in the same way -does, while the attention of the second sub-layer is distributed across the concatenated first pass outputs and hidden states. The input to both sub-layers is the output of the self-attention layer, and the outputs of the sub-layers are summed as the final output and then (with a residual connection) fed to the visual attention layer if the decoder is multimodal or to the fully connected layer otherwise. For the cascade version, the only difference is that, instead of two sub-layers, we have two separate, successive layers with the same functionalities. It is worth mentioning that we introduce the attention to the first pass only at the initial three decoder blocks out of the total six of the second pass decoder (-), following BIBREF7. Additive Visual Conditioning (-) & Visual Attention (-) -and -are simply applying -and -respectively to a deliberation model, therefore more details have been introduced in Section SECREF5. For -, similar to in -, we add a projection of the visual features to the output of -, and use -as the first pass decoder and either additive or cascade deliberation as the -. For -, in a similar vein as -, the encoder in this setting is simply -and the first pass decoder is just -, but this time -is responsible for attending to the first pass output as well as the visual features. For both additive and cascade deliberation, a visual attention layer is inserted immediately before the fully-connected layer, so that the penultimate layer of a decoder block now attends to visual information. <<</Multimodal Deliberation>>> <<</Multimodality>>> <<</Methods>>> <<<Experiments>>> <<<Dataset>>> We stick to the default training/validation/test splits and the pre-extracted speech features for the How2 dataset, as provided by the organizers. As for the pre-processing, we lowercase the sentences and then tokenise them using Moses BIBREF26. We then apply subword segmentation BIBREF27 by learning separate English and Portuguese models with 20,000 merge operations each. The English corpus used when training the subword model consists of both the ground-truth video subtitles and the noisy transcripts produced by the underlying ASR system. We do not share vocabularies between the source and target domains. Finally for the post-processing step, we merge the subword tokens, apply recasing and detokenisation. The recasing model is a standard Moses baseline trained again on the parallel How2 corpus. The baseline ASR system is trained on the How2 dataset as well. This system is then used to obtain noisy transcripts for the whole dataset, using beam-search with beam size of 10. 
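As a rough illustration of the MT-side pre-processing described above (lowercasing, Moses tokenisation and subword segmentation with 20,000 merge operations), the following sketch uses sacremoses and subword-nmt as stand-ins; the exact toolchain invocation is not specified here, and the toy corpus and the reduced merge count are purely illustrative.

```python
# Sketch of the MT-side pre-processing: lowercase, Moses-tokenize,
# then learn and apply BPE (20,000 merges in the paper; 100 here on a toy corpus).
import io
from sacremoses import MosesTokenizer
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

mt_en = MosesTokenizer(lang="en")
corpus = ["How to cook rice in a pressure cooker.",
          "First rinse the rice under cold water."]          # toy stand-in corpus
tokenized = [mt_en.tokenize(s.lower(), return_str=True) for s in corpus]

codes = io.StringIO()
learn_bpe(io.StringIO("\n".join(tokenized)), codes, num_symbols=100)
codes.seek(0)
bpe = BPE(codes)
print([bpe.process_line(s) for s in tokenized])
```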
The pre-processing pipeline for the ASR is different from the MT pipeline in the sense that the punctuations are removed and the subword segmentation is performed using SentencePiece BIBREF28 with a vocabulary size of 5,000. The test-set performance of this ASR is around 19% WER. <<</Dataset>>> <<<Training>>> We train our transformer and deliberation models until convergence largely with transformer_big hyperparameters: 16 attention heads, 1024-D hidden states and a dropout of 0.1. During inference, we apply beam-search with beam size of 10. For deliberation, we first train the underlying transformer model until convergence, and use its weights to initialise the encoder and the first pass decoder. After freezing those weights, we train -until convergence. The reason for the partial freezing is that our preliminary experiments showed that it enabled better performance compared to updating the whole model. Following BIBREF20, we obtain 10-best samples from the first pass with beam-search for source augmentation during the training of -. We train all the models on an Nvidia RTX 2080Ti with a batch size of 1024, a base learning rate of 0.02 with 8,000 warm-up steps for the Adam BIBREF29 optimiser, and a patience of 10 epochs for early stopping based on approx-BLEU () for the transformers and 3 epochs for the deliberation models. After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set. The transformer and the deliberation models are based upon the library BIBREF31 (v1.3.0 RC1) as well as the vanilla transformer-based deliberation BIBREF20 and their multimodal variants BIBREF7. <<</Training>>> <<</Experiments>>> <<<Results & Analysis>>> <<<Quantitative Results>>> We report tokenised results obtained using the multeval toolkit BIBREF32. We focus on single system performance and thus, do not perform any ensembling or checkpoint averaging. The scores of the models are shown in Table TABREF17. Evident from the table is that the best models overall are -and –with a score of 39.8, and the other multimodal transformers have slightly worse performance, showing score drops around 0.1. Also, none of the multimodal transformer systems are significantly different from the baseline, which is a sign of the limited extent to which visual features affect the output. For additive deliberation (-), the performance variation is considerably larger: -and take the lead with 37.6 , but the next best system (-) plunges to 37.2. The other two (-& -) also have noticeably worse results (36.0 and 37.0). Overall, however, -is still similar to the transformers in that the baseline generally yields higher-quality translations. Cascade deliberation, on the other hand, is different in that its text-only baseline is outperformed by most of its multimodal counterparts. Multimodality enables boosts as large as around 1 point in the cases of -and -, both of which achieve about 37.4 and are significantly different from the baseline. Another observation is that the deliberation models as a whole lead to worse performance than the canonical transformers, with deterioration ranging from 2.3 (across -variants) to 3.5 (across -systems), which defies the findings of BIBREF7. We leave this to future investigations. 
<<</Quantitative Results>>> <<<Incongruence Analysis>>> To further probe the effect of multimodality, we follow the incongruent decoding approach BIBREF15, where our multimodal models are fed with mismatched visual features. The general assumption is that a model will have learned to exploit visual information to help with its translation, if it shows substantial performance degradation when given wrong visual features. The results are reported in Table TABREF19. Overall, there are considerable parallels between the transformers and the cascade deliberation models in terms of the incongruence effect, such as universal performance deterioration (ranging from 0.1 to 0.6 ) and more noticeable score changes ($\downarrow $ 0.5 for –and $\downarrow $ 0.6 for —) in the -setting compared to the other scenarios. Additive deliberation, however, manifests a drastically different pattern, showing almost no incongruence effect for -, only a 0.2 decrease for -, and even a 0.1 boost for -and -. Therefore, the determination can be made that and -models are considerably more sensitive to incorrect visual information than -, which means the former better utilise visual clues during translation. Interestingly, the extent of performance degradation caused by incongruence is not necessarily correlated with the congruent scores. For example, –is on par with –in congruent decoding (differing by around 0.1 ), but the former suffers only a 0.1-loss with incongruence whereas the figure for the latter is 0.4, in addition to the fact that the latter becomes significantly different after incongruent decoding. This means that some multimodal models that are sensitive to incongruence likely complement visual attention with textual attention but without getting higher-quality translation as a result. The differences between the multimodal behaviour of additive and cascade deliberation also warrant more investigation, since the two types of deliberation are identical in their utilisation of visual features and only vary in their handling of the textual attention to the outputs of the encoder and the first pass decoder. <<</Incongruence Analysis>>> <<</Results & Analysis>>> <<<Conclusions>>> We explored a series of transformers and deliberation based models to approach cascaded multimodal speech translation as our participation in the How2-based speech translation task of IWSLT 2019. We submitted the –system, which is a canonical transformer with visual attention over the convolutional features, as our primary system with the remaining ones marked as contrastive ones. The primary system obtained a of 39.63 on the public IWSLT19 test set, whereas -, the top contrastive system on the same set, achieved 39.85. Our main conclusions are as follows: (i) the visual modality causes varying levels of translation quality damage to the transformers and additive deliberation, but boosts cascade deliberation; (ii) the multimodal transformers and cascade deliberation show performance degradation due to incongruence, but additive deliberation is not as affected; (iii) there is no strict correlation between incongruence sensitivity and translation performance. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nMethods\nAutomatic Speech Recognition\nDeliberation-based NMT\nMultimodality\nVisual Features\nIntegration Approaches\nMultimodal Transformers\nMultimodal Deliberation\nExperiments\nDataset\nTraining\nResults & Analysis\nQuantitative Results\nIncongruence Analysis\nConclusions" ], "type": "outline" }
1912.00159
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Automatic Creation of Text Corpora for Low-Resource Languages from the Internet: The Case of Swiss German <<<Abstract>>> This paper presents SwissCrawl, the largest Swiss German text corpus to date. Composed of more than half a million sentences, it was generated using a customized web scraping tool that could be applied to other low-resource languages as well. The approach demonstrates how freely available web pages can be used to construct comprehensive text corpora, which are of fundamental importance for natural language processing. In an experimental evaluation, we show that using the new corpus leads to significant improvements for the task of language modeling. To capture new content, our approach will run continuously to keep increasing the corpus over time. <<</Abstract>>> <<<Introduction>>> Swiss German (“Schwyzerdütsch” or “Schwiizertüütsch”, abbreviated “GSW”) is the name of a large continuum of dialects attached to the Germanic language tree spoken by more than 60% of the Swiss population BIBREF0. Used every day from colloquial conversations to business meetings, Swiss German in its written form has become more and more popular in recent years with the rise of blogs, messaging applications and social media. However, the variability of the written form is rather large as orthography is more based on local pronunciations and emerging conventions than on a unique grammar. Even though Swiss German is widely spread in Switzerland, there are still few natural language processing (NLP) corpora, studies or tools available BIBREF1. This lack of resources may be explained by the small pool of speakers (less than one percent of the world population), but also the many intrinsic difficulties of Swiss German, including the lack of official writing rules, the high variability across different dialects, and the informal context in which texts are commonly written. Furthermore, there is no official top-level domain (TLD) for Swiss German on the Internet, which renders the automatic collection of Swiss German texts more difficult. To automate the treatment of Swiss German and foster its adoption in online services such as automatic speech recognition (ASR), we gathered the largest corpus of written Swiss German to date by crawling the web using a customized tool. We highlight the difficulties for finding Swiss German on the web and demonstrate in an experimental evaluation how our text corpus can be used to significantly improve an important NLP task that is a fundamental part of the ASR process: language modeling. <<</Introduction>>> <<<Related Work>>> Few GSW corpora already exists. Although they are very valuable for research on specific aspects of the Swiss German language, they are either highly specialized BIBREF2 BIBREF3 BIBREF4, rather small BIBREF1 (7,305 sentences), or do not offer full sentences BIBREF5. To our knowledge, the only comprehensive written Swiss German corpus to date comes from the Leipzig corpora collection initiative BIBREF6 offering corpora for more than 136 languages. The Swiss German data has two sources: the Alemannic Wikipedia and web crawls on the .ch domain in 2016 and 2017, leading to a total of 175,399 unique sentences. 
While the Leipzig Web corpus for Swiss German is of considerable size, we believe this number does not reflect the actual amount of GSW available on the Internet. Furthermore, the enforced sentence structures do not represent the way Swiss German speakers write online. In this paper, we thus aim at augmenting the Leipzig Web corpus by looking further than the .ch domain and by using a suite of tools specifically designed for retrieving Swiss German. The idea of using the web as a vast source of linguistic data has been around for decades BIBREF7 and many authors have already addressed its importance for low-resources languages BIBREF8. A common technique is to send queries made of mid-frequency $n$-grams to a search engine to gather bootstrap URLs, which initiate a crawl using a breadth-first strategy in order to gather meaningful information, such as documents or words BIBREF9, BIBREF5. Existing tools and studies, however, have requirements that are inadequate for the case of Swiss German. For example, GSW is not a language known to search engines BIBREF9, does not have specific TLDs BIBREF10, and lacks good language identification models. Also, GSW documents are too rare to use bootstrapping techniques BIBREF8. Finally, as GSW is scarce and mostly found in comments sections or as part of multilingual web pages (e.g. High German), we cannot afford to “privilege precision over recall” BIBREF11 by focusing on the main content of a page. As a consequence, our method is based on known techniques that are adapted to deal with those peculiarities. Furthermore, it was designed for having a human in the loop. Its iterative nature makes it possible to refine each step of the tool chain as our knowledge of GSW improves. <<</Related Work>>> <<<Proposed System>>> The two main components of our proposed system are shown in Figure FIGREF1: a seeder that gathers potentially interesting URLs using a Search Engine and a crawler that extracts GSW from web pages, linked together by a MongoDB database. The system is implemented in Python 3, with the full code available on GitHub. Due to the exploratory nature of the task, the tool chain is executed in an iterative manner, allowing us to control and potentially improve the process punctually. <<<Language Identification>>> Language identification (LID) is a central component of the pipeline, as it has a strong influence on the final result. In addition, readily available tools are not performing at a satisfying level. For these reasons we created a tailor-made LID system for this situation. LID has been extensively studied over the past decades BIBREF12 and has achieved impressive results on long monolingual documents in major languages such as English. However, the task becomes more challenging when the pool of training data is small and of high variability, and when the unit of identification is only a sentence. Free pretrained LIDs supporting GSW such as FastText BIBREF13 are trained on the Alemannic Wikipedia, which encompasses not only GSW, but also German dialects such as Badisch, Elsässisch, Schwäbisch and Vorarlbergisch. This makes the precision of the model insufficient for our purposes. The dataset used to build our Swiss German LID is based on the Leipzig text corpora BIBREF6, mostly focusing on the texts gathered from the Internet. In preliminary experiments, we have chosen eight language classes shown in Table TABREF4, which give precedence to languages closely related to Swiss German in their structure. 
In this Table, GSW_LIKE refers to a combination of dialects that are similar to Swiss German but for which we did not have sufficient resources to model classes on their own. A total of 535,000 sentences are considered for LID with an equal distribution over the eight classes. The 66,684 GSW sentences originate from the Leipzig web corpus 2017 and have been refined during preliminary experiments to exclude obvious non-GSW contents. We use 75% of the data for training, 10% for optimizing system parameters, and 15% for testing the final performance. Using a pretrained German BERT model BIBREF14 and fine-tuning it on our corpus, we obtain a high LID accuracy of 99.58%. GSW is most confused with German (0.04%) and GSW_LIKE (0.04%). We have also validated the LID system on SMS sentences BIBREF2, where it proves robust for sentences as short as five words. <<</Language Identification>>> <<<The Seeder>>> Query generation has already been extensively studied BIBREF15, BIBREF9. In the case of Swiss German, we tested three different approaches: (a) most frequent trigrams, (b) selection of 2 to 7 random words weighted by their frequency distribution and (c) human-generated queries. When comparing the corpora generated by 100 seeds of each type, we did not observe significant differences in terms of quantity or quality for the three seeding strategies. On a positive side, $50\%$ of the sentences were different from one seed strategy to the other, suggesting for an approach where strategies are mixed. However, we also observed that (a) tends to yield more similar queries over time and (c) is too time-consuming for practical use. Considering these observations, we privileged the following approach: Start with a list of sentences, either from a bootstrap dataset or from sentences from previous crawls using one single sentence per unique URL; Compute the frequency over the vocabulary, normalizing words to lower case and discarding those having non-alphabetic characters; Filter out words appearing only once or present in German or English vocabularies; Generate query seeds by sampling 3 words with a probability following their frequency distribution; Exclude seeds with more than two single-letter words or having a GSW probability below 95% (see Section SECREF3). Initial sentences come from the Leipzig web corpus 2017, filtered by means of the LID described in Section SECREF3 Each seed is submitted to startpage.com, a Google Search proxy augmented with privacy features. To ensure GSW is not auto-corrected to High German, each word is first surrounded by double quotes. The first 20 new URLs, i.e. URLs that were never seen before, are saved for further crawling. <<</The Seeder>>> <<<The Crawler>>> The crawler starts with a list of URLs and metadata taken either from a file or from the MongoDB instance, and are added to a task queue with a depth of 0. As illustrated in Figure FIGREF1, each task consists of a series of steps that will download the page content, extract well-formed GSW sentences and add links found on the page to the task queue. At different stages of this pipeline, a decider can intervene in order to stop the processing early. A crawl may also be limited to a given depth, usually set to 3. <<<Scrape>>> The raw HTML content is fetched and converted to UTF-8 using a mixture of requests and BeautifulSoup. Boilerplate removal such as navigation and tables uses jusText BIBREF16, but ignores stop words filtering as such a list is not available for GSW. The output is a UTF-8 text containing newlines. 
<<</Scrape>>> <<<Normalize>>> This stage tries to fix remaining encoding issues using ftfy BIBREF17 and to remove unicode emojis. Another important task is to normalize the unicode code points used for accents, spaces, dashes, quotes etc., and strip any invisible characters. To further improve the usability of the corpus and to simplify tokenization, we also try to enforce one single convention for spaces around quotes and colons, e.g. colons after closing quote, no space inside quotes. <<</Normalize>>> <<<Split>>> To split text into sentences, we implemented Moses' split-sentences.perl in Python and changed it in three main ways: existing newlines are preserved, colons and semi-colons are considered segmentation hints and sentences are not required to start with an uppercase. The latter is especially important as GSW is mostly found in comments where people tend to write fast and without proper casing/punctuation. The list of non-breaking prefixes used is a concatenation of the English and German prefixes found in Moses with few additions. <<</Split>>> <<<Filter>>> Non- or bad- sentences are identified based on a list of $20+$ rules that normal sentences should obey. Most rules are specified in the form of regular expression patterns and boundaries of acceptable occurrences, few compare the ratio of occurrence between two patterns. Examples of such rules in natural language are: “no more than one hashtag”, “no word with more than 30 characters”, “the ratio capitalized/lowercase words is below 1.5”. <<</Filter>>> <<<Language ID>>> Using the LID described in Section SECREF3, sentences with a GSW probability of less than 92% are discarded. This threshold is low on purpose in order to favor recall over precision. <<</Language ID>>> <<<Link filter>>> This component is used to exclude or transform outgoing links found in a page based on duplicates, URL composition, but also specific rules for big social media sites or known blogs. Examples are the exclusion of unrelated national TLDs (.af, .nl, ...) and known media extensions (.pdf, .jpeg, etc.), the stripping of session IDs in URL parameters, and the homogenization of subdomains for sites such as Twitter. Note that filtering is based only on the URL and therefore does not handle redirects or URLs pointing to the same page. This leads to extra work during the crawling, but keeps the whole system simple. <<</Link filter>>> <<<Decide>>> A decider has three main decisions to take. First, based on the metadata associated with an URL, should it be visited? In practice, we visit only new URLs, but the tool is designed in a way such that a recrawl is possible if the page is detected as highly dynamic. The second decision arises at the end of the processing, where the page can be either saved or blacklisted. To favor recall, we currently keep any URL with at least one GSW sentence. Finally, the decider can choose to visit the outgoing links or not. After some trials, we found that following links from pages with more than two new GSW sentences is a reasonable choice, as pages with less sentences are often quotes or false positives. <<</Decide>>> <<<Duplicates>>> During the crawl, the uniqueness of sentences and URLs considers only exact matches. However, when exporting the results, near-duplicate sentences are removed by first stripping any non-letter (including spaces) and making a lowercase comparison. We tried other near-duplicate approaches, but found that they also discarded meaningful writing variations. 
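The export-time near-duplicate criterion described above (identical after removing all non-letter characters and lowercasing) can be sketched as follows; this is an illustration rather than the actual implementation.

```python
# Near-duplicate removal at export time: compare sentences on a key that
# keeps only letters (including umlauts) and lowercases them.
import unicodedata

def dedup_key(sentence):
    letters = (ch for ch in sentence if unicodedata.category(ch).startswith("L"))
    return "".join(letters).lower()

def deduplicate(sentences):
    seen, kept = set(), []
    for s in sentences:
        key = dedup_key(s)
        if key and key not in seen:
            seen.add(key)
            kept.append(s)
    return kept

print(deduplicate(["Das isch e Test!", "das isch e test …", "Das isch en andere Satz."]))
# keeps the first and the third sentence only
```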
<<</Duplicates>>> <<</The Crawler>>> <<</Proposed System>>> <<<State of the Swiss German Web>>> Table TABREF14 shows the results of running the system three times using 100 seeds on a virtual machine with 5 CPU cores and no GPUs. As expected, the first iteration yields the most new sentences. Unfortunately, the number of newly discovered hosts and sentences decreases exponentially as the system runs, dropping to 20K sentences on the third iteration. This result emphasizes the fact that the amount of GSW on the web is very limited. The third iteration took also significantly longer, which highlights the difficulties of crawling the web. In this iteration, some URLs had as much as 12 thousand outgoing links that we had to visit before discarding. Another problem arises on web sites where query parameters are used in URLs to encode cookie information and on which duplicate hypotheses cannot be solved unless visiting the links. On each new search engine query, we go further down the list of results as the top ones may already be known. As such, the percentage of pertinent URLs retrieved (% good, see decider description in Section SECREF13) slowly decreases at each iteration. It is however still above 55% of the retrieved URLs on the third run, indicating a good quality of the seeds. <<</State of the Swiss German Web>>> <<<The SwissCrawl Text Corpus>>> Using the proposed system, we were able to gather more than half a million unique GSW sentences from around the web. The crawling took place between September and November 2019. The corpus is available for download in the form of a CSV file with four columns: text, url, crawl_proba, date, with crawl_proba being the GSW probability returned by the LID system (see Section SECREF3). <<<Contents>>> The corpus is composed of 562,524 sentences from 62K URLs among 3,472 domains. The top ten domains (see Table TABREF18) are forums and social media sites. They account for 46% of the whole corpus. In general, we consider a GSW probability of $\ge {99}\%$, to be indeed Swiss German with high confidence. This represents more than 89% of the corpus (500K) (see Figure FIGREF19). The sentence length varies between 25 and 998 characters with a mean of $92\pm 55$ and a median of 77 (see Figure FIGREF20), while the number of words lies between 4 and 222, with a mean of $16\pm 10$ and a median of 14. This highlights a common pattern in Swiss German writings: used mostly in informal contexts, sentences tend to be short and to include many symbols, such as emojis or repetitive punctuation. Very long sentences are usually lyrics that lack proper punctuation and thus could not be segmented properly. We however decided to keep them in the final corpus, as they could be useful in specific tasks and are easy to filter out otherwise. Besides the normalization described in SECREF13, no cleaning nor post-processing is applied to the sentences. This is a deliberate choice to avoid losing any information that could be pertinent for a given task or for further selection. As a result, the mean letter density is 80% and only 61% of sentences both start with an uppercase letter and end with a common punctuation mark (.!?). Finally, although we performed no human validation per se, we actively monitored the crawling process to spot problematic domains early. This allowed to blacklist some domains entirely, for example those serving embedded PDFs (impossible to parse properly) or written in very close German dialects. 
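To illustrate how the released CSV can be used, the following sketch filters the corpus by the GSW probability threshold mentioned above; the file name is a placeholder, pandas is our choice rather than something prescribed here, and the code assumes crawl_proba is stored in the [0, 1] range.

```python
# Load the corpus CSV (columns: text, url, crawl_proba, date) and keep
# the high-confidence subset with crawl_proba >= 0.99.
import pandas as pd

corpus = pd.read_csv("swisscrawl.csv")                       # placeholder file name
high_conf = corpus[corpus["crawl_proba"] >= 0.99]            # roughly 89% of the corpus
print(len(high_conf), "high-confidence sentences")
print(high_conf["text"].str.len().describe())                # character-length statistics
# naive host extraction to inspect the top domains
print(high_conf["url"].apply(lambda u: u.split("/")[2]).value_counts().head(10))
```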
<<</Contents>>> <<<Discussion>>> Table TABREF23 shows some hand-picked examples. As most of our sources are social media sites and forums, the writing style is often colloquial, interspersed with emojis and slang. This perfectly reflects the use of GSW in real life, where speakers switch to High German in formal conversations. In general, the quality of the sentences is good, with few false positives, mostly in High German or other German dialects, and more rarely in Dutch or Luxembourgish. The presence of specific structures in the sentences is often the cause of such mistakes, as they yield strong GSW cues. For example: High German with spelling mistakes or broken words; GSW named entities ("Ueli Aeschbacher", "Züri"); the presence of many umlauts and/or short words; the repetition of letters, also used to convey emotions. The quality of the corpus highly depends on the text extraction step, which itself depends on the HTML structure of the pages. As there are no enforced standards and each website has its own needs, it is impossible to handle all edge cases. For example, some sites use hidden <span> elements to hold information, which become part of the extracted sentences. This is true for watson.ch and was dealt with using a specific rule, but there are still instances we did not detect. Splitting text into sentences is not a trivial task. Typical segmentation mistakes come from the use of ASCII emojis as punctuation marks (see text sample 3 in Table TABREF23), which are very common in forums. They are hard to detect due to the variability of each individual style. We defined duplicates as having the exact same letters. As such, some sentences may differ by one umlaut and some may be truncations of others (e.g. excerpts with ellipsis). Finally, the corpus also contains poems and lyrics. Sometimes repetitive and especially hard to segment, they are still an important source of Swiss German online. In any case, they may be filtered out using cues in the sentence length and the URLs. <<</Discussion>>> <<</The SwissCrawl Text Corpus>>> <<<Swiss German Language Modeling>>> To demonstrate the effectiveness of the SwissCrawl corpus, we conducted a series of experiments for the NLP task of language modeling. The whole code is publicly available on GitHub. Using the GPT-2 BIBREF18 model in its base configuration (12 layers, 768 hidden states, 12 heads, 117M parameters), we trained three models using different training data: (1) Leipzig, the unique sentences from the Leipzig GSW web corpus; (2) SwissCrawl, our sentences with a GSW probability $\ge {99}\%$ (see Section SECREF17); (3) Both, the union of (1) and (2). For each model, the vocabulary is generated using Byte Pair Encoding (BPE) BIBREF19 applied to the training set. The independent test sets are composed of 20K samples from each source. Table TABREF32 shows the perplexity of the models on each of the test sets. As expected, each model performs better on the test set it has been trained on. When applied to a different test set, both single-corpus models see an increase in perplexity. However, the Leipzig model seems to have more trouble generalizing: its perplexity nearly doubles on the SwissCrawl test set and rises by twenty on the combined test set. The best results are achieved by combining both corpora: while the perplexity on our corpus only marginally improves (from $49.5$ to $45.9$), the perplexity on the Leipzig corpus improves significantly (from $47.6$ to $30.5$, a 36% relative improvement).
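For reference, perplexity evaluation of a trained GPT-2 checkpoint could be sketched as follows with the Hugging Face transformers API; the released code on GitHub may differ, and the checkpoint path is a placeholder.

```python
# Illustrative sentence-level perplexity computation for a GPT-2 model.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def perplexity(model, tokenizer, sentences, device="cpu"):
    model.to(device).eval()
    nll, n_tokens = 0.0, 0
    with torch.no_grad():
        for s in sentences:
            ids = tokenizer(s, return_tensors="pt").input_ids.to(device)
            loss = model(ids, labels=ids).loss       # mean NLL over shifted tokens
            nll += loss.item() * (ids.size(1) - 1)
            n_tokens += ids.size(1) - 1
    return math.exp(nll / n_tokens)

tok = GPT2TokenizerFast.from_pretrained("path/to/gsw-gpt2")   # placeholder checkpoint
lm = GPT2LMHeadModel.from_pretrained("path/to/gsw-gpt2")
print(perplexity(lm, tok, ["Das isch e Test.", "Hüt gömmer id Berge."]))
```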
<<</Swiss German Language Modeling>>> <<<Conclusion>>> In this paper, we presented the tools developed to gather the most comprehensive collection of written Swiss German to our knowledge. It represents Swiss German in the way it is actually used in informal contexts, both with respect to the form (punctuation, capitalization, ...) and the content (slang, elliptic sentences, ...). We have demonstrated how this new resource can significantly improve Swiss German language modeling. We expect that other NLP tasks, such as LID and eventually machine translation, will also be able to profit from this new resource in the future. Our experiments support the reasoning that Swiss German is still scarce and very hard to find online. Still, the Internet is in constant evolution and we aim to keep increasing the corpus size by rerunning the tool chain at regular intervals. Another line of future development is the customization of the tools for big social media platforms such as Facebook and Twitter, where most of the content is only accessible through specific APIs. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nProposed System\nLanguage Identification\nThe Seeder\nThe Crawler\nScrape\nNormalize\nSplit\nFilter\nLanguage ID\nLink filter\nDecide\nDuplicates\nState of the Swiss German Web\nThe SwissCrawl Text Corpus\nContents\nDiscussion\nSwiss German Language Modeling\nConclusion" ], "type": "outline" }
1909.05855
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset <<<Abstract>>> Virtual assistants such as Google Assistant, Alexa and Siri provide a conversational interface to a large number of services and APIs spanning multiple domains. Such systems need to support an ever-increasing number of services with possibly overlapping functionality. Furthermore, some of these services have little to no training data available. Existing public datasets for task-oriented dialogue do not sufficiently capture these challenges since they cover few domains and assume a single static ontology per domain. In this work, we introduce the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains. Our dataset exceeds the existing task-oriented dialogue corpora in scale, while also highlighting the challenges associated with building large-scale virtual assistants. It provides a challenging testbed for a number of tasks including language understanding, slot filling, dialogue state tracking and response generation. Along the same lines, we present a schema-guided paradigm for task-oriented dialogue, in which predictions are made over a dynamic set of intents and slots, provided as input, using their natural language descriptions. This allows a single dialogue system to easily support a large number of services and facilitates simple integration of new services without requiring additional training data. Building upon the proposed paradigm, we release a model for dialogue state tracking capable of zero-shot generalization to new APIs, while remaining competitive in the regular setting. <<</Abstract>>> <<<Introduction>>> Virtual assistants help users accomplish tasks including but not limited to finding flights, booking restaurants and, more recently, navigating user interfaces, by providing a natural language interface to services and APIs on the web. The recent popularity of conversational interfaces and the advent of frameworks like Actions on Google and Alexa Skills, which allow developers to easily add support for new services, have resulted in a major increase in the number of application domains and individual services that assistants need to support, following the pattern of smartphone applications. Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, M2M BIBREF1 and FRAMES BIBREF2. However, existing datasets for multi-domain task-oriented dialogue do not sufficiently capture a number of challenges that arise with scaling virtual assistants in production. These assistants need to support a large, constantly increasing number of services BIBREF3 over a large number of domains. In comparison, existing public datasets cover few domains.
Furthermore, they define a single static API per domain, whereas multiple services with overlapping functionality, but heterogeneous interfaces, exist in the real world. To highlight these challenges, we introduce the Schema-Guided Dialogue (SGD) dataset, which is, to the best of our knowledge, the largest public task-oriented dialogue corpus. It exceeds existing corpora in scale, with over 16000 dialogues in the training set spanning 26 services belonging to 16 domains (more details in Table TABREF2). Further, to adequately test the models' ability to generalize in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants. We also propose the schema-guided paradigm for task-oriented dialogue, advocating building a single unified dialogue model for all services and APIs. Using a service's schema as input, the model would make predictions over this dynamic set of intents and slots present in the schema. This setting enables effective sharing of knowledge among all services, by relating the semantic information in the schemas, and allows the model to handle unseen services and APIs. Under the proposed paradigm, we present a novel architecture for multi-domain dialogue state tracking. By using large pretrained models like BERT BIBREF4, our model can generalize to unseen services and is robust to API changes, while achieving state-of-the-art results on the original and updated BIBREF5 MultiWOZ datasets. <<</Introduction>>> <<<Related Work>>> Task-oriented dialogue systems have constituted an active area of research for decades. The growth of this field has been consistently fueled by the development of new datasets. Initial datasets were limited to one domain, such as ATIS BIBREF6 for spoken language understanding for flights. The Dialogue State Tracking Challenges BIBREF7, BIBREF8, BIBREF9, BIBREF10 contributed to the creation of dialogue datasets with increasing complexity. Other notable related datasets include WOZ2.0 BIBREF11, FRAMES BIBREF2, M2M BIBREF1 and MultiWOZ BIBREF0. These datasets have utilized a variety of data collection techniques, falling within two broad categories: Wizard-of-Oz This setup BIBREF12 connects two crowd workers playing the roles of the user and the system. The user is provided a goal to satisfy, and the system accesses a database of entities, which it queries as per the user's preferences. WOZ2.0, FRAMES and MultiWOZ, among others, have utilized such methods. Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically. As virtual assistants incorporate diverse domains, recent work has focused on zero-shot modeling BIBREF13, BIBREF14, BIBREF15, domain adaptation and transfer learning techniques BIBREF16. Deep-learning based approaches have achieved state of the art performance on dialogue state tracking tasks. 
Popular approaches on small-scale datasets estimate the dialogue state as a distribution over all possible slot-values BIBREF17, BIBREF11 or individually score all slot-value combinations BIBREF18, BIBREF19. Such approaches are not practical for deployment in virtual assistants operating over real-world services having a very large and dynamic set of possible values. Addressing these concerns, approaches utilizing a dynamic vocabulary of slot values have been proposed BIBREF20, BIBREF21, BIBREF22. <<</Related Work>>> <<<The Schema-Guided Dialogue Dataset>>> An important goal of this work is to create a benchmark dataset highlighting the challenges associated with building large-scale virtual assistants. Table TABREF2 compares our dataset with other public datasets. Our Schema-Guided Dialogue (SGD) dataset exceeds other datasets in most of the metrics at scale. The especially larger number of domains, slots, and slot values, and the presence of multiple services per domain, are representative of these scale-related challenges. Furthermore, our evaluation sets contain many services, and consequently slots, which are not present in the training set, to help evaluate model performance on unseen services. The 17 domains (`Alarm' domain not included in training) present in our dataset are listed in Table TABREF5. We create synthetic implementations of a total of 34 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are a structured representation of dialogue semantics. We then used a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps in detail and then present analyses of the collected dataset. <<<Services and APIs>>> We define the schema for a service as a combination of intents and slots with additional constraints, with an example in Figure FIGREF7. We implement all services using a SQL engine. For constructing the underlying tables, we sample a set of entities from Freebase and obtain the values for slots defined in the schema from the appropriate attribute in Freebase. We decided to use Freebase to sample real-world entities instead of synthetic ones since entity attributes are often correlated (e.g, a restaurant's name is indicative of the cuisine served). Some slots like event dates/times and available ticket counts, which are not present in Freebase, are synthetically sampled. To reflect the constraints present in real-world services and APIs, we impose a few other restrictions. First, our dataset does not expose the set of all possible slot values for some slots. Having such a list is impractical for slots like date or time because they have infinitely many possible values or for slots like movie or song names, for which new values are periodically added. Our dataset specifically identifies such slots as non-categorical and does not provide a set of all possible values for these. We also ensure that the evaluation sets have a considerable fraction of slot values not present in the training set to evaluate the models in the presence of new values. Some slots like gender, number of people, day of the week etc. are defined as categorical and we specify the set of all possible values taken by them. However, these values are not assumed to be consistent across services. 
E.g., different services may use (`male', `female'), (`M', `F') or (`he', `she') as possible values for gender slot. Second, real-world services can only be invoked with a limited number of slot combinations: e.g. restaurant reservation APIs do not let the user search for restaurants by date without specifying a location. However, existing datasets simplistically allow service calls with any given combination of slot values, thus giving rise to flows unsupported by actual services or APIs. As in Figure FIGREF7, the different service calls supported by a service are listed as intents. Each intent specifies a set of required slots and the system is not allowed to call this intent without specifying values for these required slots. Each intent also lists a set of optional slots with default values, which the user can override. <<</Services and APIs>>> <<<Dialogue Simulator Framework>>> The dialogue simulator interacts with the services to generate dialogue outlines. Figure FIGREF9 shows the overall architecture of our dialogue simulator framework. It consists of two agents playing the roles of the user and the system. Both agents interact with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. These dialogue acts can take a slot or a slot-value pair as argument. Figure FIGREF13 shows all dialogue acts supported by the agents. At the start of a conversation, the user agent is seeded with a scenario, which is a sequence of intents to be fulfilled. We identified over 200 distinct scenarios for the training set, each comprising up to 5 intents. For multi-domain dialogues, we also identify combinations of slots whose values may be transferred when switching intents e.g. the 'address' slot value in a restaurant service could be transferred to the 'destination' slot for a taxi service invoked right after. The user agent then generates the dialogue acts to be output in the next turn. It may retrieve arguments i.e. slot values for some of the generated acts by accessing either the service schema or the raw SQL backend. The acts, combined with the respective parameters yield the corresponding user actions. Next, the system agent generates the next set of actions using a similar procedure. Unlike the user agent, however, the system agent has restricted access to the services (denoted by dashed line), e.g. it can only query the services by supplying values for all required slots for some service call. This helps us ensure that all generated flows are valid. After an intent is fulfilled through a series of user and system actions, the user agent queries the scenario to proceed to the next intent. Alternatively, the system may suggest related intents e.g. reserving a table after searching for a restaurant. The simulator also allows for multiple intents to be active during a given turn. While we skip many implementation details for brevity, it is worth noting that we do not include any domain-specific constraints in the simulation automaton. All domain-specific constraints are encoded in the schema and scenario, allowing us to conveniently use the simulator across a wide variety of domains and services. <<</Dialogue Simulator Framework>>> <<<Dialogue Paraphrasing>>> The dialogue paraphrasing framework converts the outlines generated by the simulator into a natural conversation. 
Figure FIGREF11a shows a snippet of the dialogue outline generated by the simulator, containing a sequence of user and system actions. The slot values present in these actions are in a canonical form because they are obtained directly from the service. However, users may refer to these values in various different ways during the conversation, e.g., “los angeles” may be referred to as “LA” or “LAX”. To introduce these natural variations in the slot values, we replace different slot values with a randomly selected variation (kept consistent across user turns in a dialogue) as shown in Figure FIGREF11b. Next, we define a set of action templates for converting each action into an utterance. A few examples of such templates are shown below. These templates are used to convert each action into a natural language utterance, and the resulting utterances for the different actions in a turn are concatenated together as shown in Figure FIGREF11c. The dialogue transformed by these steps is then sent to the crowd workers. One crowd worker is tasked with paraphrasing all utterances of a dialogue to ensure naturalness and coherence. In our paraphrasing task, the crowd workers are instructed to exactly repeat the slot values in their paraphrases. This not only helps us verify the correctness of the paraphrases, but also lets us automatically obtain slot spans in the generated utterances by string search. This automatic slot span generation greatly reduced the annotation effort required, with little impact on dialogue naturalness, thus allowing us to collect more data with the same resources. Furthermore, it is important to note that this entire procedure preserves all other annotations obtained from the simulator including the dialogue state. Hence, no further annotation is needed. <<</Dialogue Paraphrasing>>> <<<Dataset Analysis>>> With over 16000 dialogues in the training set, the Schema-Guided Dialogue dataset is the largest publicly available annotated task-oriented dialogue dataset. The annotations include the active intents and dialogue states for each user utterance and the system actions for every system utterance. We have a few other annotations like the user actions but we withhold them from the public release. These annotations enable our dataset to be used as a benchmark for tasks like intent detection, dialogue state tracking, imitation learning of dialogue policy, dialogue act to text generation, etc. The schemas contain semantic information about the schema and the constituent intents and slots, in the form of natural language descriptions and other details (example in Figure FIGREF7). The single-domain dialogues in our dataset contain an average of 15.3 turns, whereas the multi-domain ones contain 23 turns on average. These numbers are also reflected in Figure FIGREF13 showing the histogram of dialogue lengths on the training set. Table TABREF5 shows the distribution of dialogues across the different domains. We note that the dataset is largely balanced in terms of the domains and services covered, with the exception of the Alarm domain, which is only present in the development set. Figure FIGREF13 shows the frequency of dialogue acts contained in the dataset. Note that all dialogue acts except INFORM, REQUEST and GOODBYE are specific to either the user or the system. <<</Dataset Analysis>>> <<</The Schema-Guided Dialogue Dataset>>> <<<The Schema-Guided Approach>>> Virtual assistants aim to support a large number of services available on the web.
One possible approach is to define a large unified schema for the assistant, with which different service providers can integrate. However, it is difficult to come up with a common schema covering all use cases. Having a common schema also complicates integration of tail services with limited developer support. We propose the schema-guided approach as an alternative to allow easy integration of new services and APIs. Under our proposed approach, each service provides a schema listing the supported slots and intents along with their natural language descriptions (Figure FIGREF7 shows an example). These descriptions are used to obtain a semantic representation of these schema elements. The assistant employs a single unified model containing no domain or service specific parameters to make predictions conditioned on these schema elements. For example, Figure FIGREF14 shows how the dialogue state representation for the same dialogue can vary for two different services. Here, the departure and arrival cities are captured by analogously functioning but differently named slots in both schemas. Furthermore, values for the number_stops and direct_only slots highlight idiosyncrasies between services interpreting the same concept. There are many advantages to this approach. First, using a single model facilitates representation and transfer of common knowledge across related services. Second, since the model utilizes semantic representation of schema elements as input, it can interface with unseen services or APIs on which it has not been trained. Third, it is robust to changes like the addition of new intents or slots to the service. <<</The Schema-Guided Approach>>> <<<Zero-Shot Dialogue State Tracking>>> Models in the schema-guided setting can condition on the pertinent services' schemas using descriptions of intents and slots. These models, however, also need access to representations for potentially unseen inputs from new services. Recent pretrained models like ELMo BIBREF23 and BERT BIBREF4 can help, since they are trained on very large corpora. Building upon these, we present our zero-shot schema-guided dialogue state tracking model. <<<Model>>> We use a single model, shared among all services and domains, to make these predictions. We first encode all the intents, slots and slot values for categorical slots present in the schema into an embedded representation. Since different schemas can have differing numbers of intents or slots, predictions are made over dynamic sets of schema elements by conditioning them on the corresponding schema embeddings. This is in contrast to existing models which make predictions over a static schema and are hence unable to share knowledge across domains and services. They are also not robust to changes in schema and require the model to be retrained with new annotated data upon addition of a new intent, slot, or in some cases, a slot value to a service. <<<Schema Embedding>>> This component obtains the embedded representations of intents, slots and categorical slot values in each service schema. Table TABREF18 shows the sequence pairs used for embedding each schema element. These sequence pairs are fed to a pretrained BERT encoder shown in Figure FIGREF20 and the output $\mathbf {u}_{\texttt {CLS}}$ is used as the schema embedding. For a given service with $I$ intents and $S$ slots, let $\lbrace \mathbf {i}_j\rbrace $, ${1 \le j \le I}$ and $\lbrace \mathbf {s}_j\rbrace $, ${1 \le j \le S}$ be the embeddings of all intents and slots respectively.
As a special case, we let $\lbrace \mathbf {s}^n_j\rbrace $, ${1 \le j \le N \le S}$ denote the embeddings for the $N$ non-categorical slots in the service. Also, let $\lbrace \textbf {v}_j^k\rbrace $, $1 \le j \le V^k$ denote the embeddings for all possible values taken by the $k^{\text{th}}$ categorical slot, $1 \le k \le C$, with $C$ being the number of categorical slots and $N + C = S$. All these embeddings are collectively called schema embeddings. <<</Schema Embedding>>> <<<Utterance Encoding>>> Like BIBREF24, we use BERT to encode the user utterance and the preceding system utterance to obtain utterance pair embedding $\mathbf {u} = \mathbf {u}_{\texttt {CLS}}$ and token level representations $\mathbf {t}_1, \mathbf {t}_2 \cdots \mathbf {t}_M$, $M$ being the total number of tokens in the two utterances. The utterance and schema embeddings are used together to obtain model predictions using a set of projections (defined below). <<</Utterance Encoding>>> <<<Projection>>> Let $\mathbf {x}, \mathbf {y} \in \mathbb {R}^d$. For a task $K$, we define $\mathbf {l} = \mathcal {F}_K(\mathbf {x}, \mathbf {y}, p)$ as a projection transforming $\mathbf {x}$ and $\mathbf {y}$ into the vector $\mathbf {l} \in \mathbb {R}^p$ using Equations DISPLAY_FORM22-. Here, $\mathbf {h_1},\mathbf {h_2} \in \mathbb {R}^d$, $W^K_i$ and $b^K_i$ for $1 \le i \le 3$ are trainable parameters of suitable dimensions and $A$ is the activation function. We use $\texttt {gelu}$ BIBREF25 activation as in BERT. <<</Projection>>> <<<Active Intent>>> For a given service, the active intent denotes the intent requested by the user and currently being fulfilled by the system. It takes the value “NONE" if no intent for the service is currently being processed. Let $\mathbf {i}_0$ be a trainable parameter in $\mathbb {R}^d$ for the “NONE" intent. We define the intent network as below. The logits $l^{j}_{\text{int}}$ are normalized using softmax to yield a distribution over all $I$ intents and the “NONE" intent. During inference, we predict the highest probability intent as active. <<</Active Intent>>> <<<Requested Slots>>> These are the slots whose values are requested by the user in the current utterance. Projection $\mathcal {F}_{\text{req}}$ predicts logit $l^j_{\text{req}}$ for the $j^{\text{th}}$ slot. Obtained logits are normalized using sigmoid to get a score in $[0,1]$. During inference, all slots with $\text{score} > 0.5$ are predicted as requested. <<</Requested Slots>>> <<<User Goal>>> We define the user goal as the user constraints specified over the dialogue context till the current user utterance. Instead of predicting the entire user goal after each user utterance, we predict the difference between the user goal for the current turn and preceding user turn. During inference, the predicted user goal updates are accumulated to yield the predicted user goal. We predict the user goal updates in two stages. First, for each slot, a distribution of size 3 denoting the slot status and taking values none, dontcare and active is obtained by normalizing the logits obtained in equation DISPLAY_FORM28 using softmax. If the status of a slot is predicted to be none, its assigned value is assumed to be unchanged. If the prediction is dontcare, then the special dontcare value is assigned to it. Otherwise, a slot value is predicted and assigned to it in the second stage. In the second stage, equation is used to obtain a logit for each value taken by each categorical slot. 
Logits for a given categorical slot are normalized using softmax to get a distribution over all possible values. The value with the maximum mass is assigned to the slot. For each non-categorical slot, logits obtained using equations and are normalized using softmax to yield two distributions over all tokens. These two distributions respectively correspond to the start and end index of the span corresponding to the slot. The indices $p \le q$ maximizing $start[p] + end[q]$ are predicted to be the span boundary and the corresponding value is assigned to the slot. <<</User Goal>>> <<</Model>>> <<<Evaluation>>> We consider the following metrics for evaluation of the dialogue state tracking task: Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted. Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped. Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. The slots which have a non-empty assignment in the ground truth dialogue state are considered for accuracy. This is the average accuracy of predicting the value of a slot correctly. A fuzzy matching score is used for non-categorical slots to reward partial matches with the ground truth. Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a turn correctly. For non-categorical slots a fuzzy matching score is used. <<<Performance on other datasets>>> We evaluate our model on public datasets WOZ2.0, MultiWOZ 2.0 and the updated MultiWOZ 2.1 BIBREF5. As results in Table TABREF37 show, our model performs competitively on all these datasets. Furthermore, we obtain state-of-the-art joint goal accuracies of 0.516 on MultiWOZ 2.0 and 0.489 on MultiWOZ 2.1 test sets respectively, exceeding the best-known results of 0.486 and 0.456 on these datasets as reported in BIBREF5. <<</Performance on other datasets>>> <<<Performance on SGD>>> The model performs well for Active Intent Accuracy and Requested Slots F1 across both seen and unseen services, shown in Table TABREF37. For joint goal and average goal accuracy, the model performs better on seen services compared to unseen ones (Figure FIGREF38). The main reason for this performance difference is a significantly higher OOV rate for slot values of unseen services. <<</Performance on SGD>>> <<<Performance on different domains (SGD)>>> The model performance also varies across various domains. The performance for the different domains is shown in (Table TABREF39) below. We observe that one of the factors affecting the performance across domains is still the presence of the service in the training data (seen services). Among the seen services, those in the `Events' domain have a very low OOV rate for slot values and the largest number of training examples which might be contributing to the high joint goal accuracy. For unseen services, we notice that the `Services' domain has a lower joint goal accuracy because of higher OOV rate and higher average turns per dialogue. For `Services' and `Flights' domains, the difference between joint goal accuracy and average accuracy indicates a possible skew in performance across slots where the performance on a few of the slots is much worse compared to all the other slots, thus considerably degrading the joint goal accuracy. 
The `RideSharing' domain also exhibits poor performance, since it possesses the largest number of the possible slot values across the dataset. We also notice that for categorical slots, with similar slot values (e.g. “Psychologist" and “Psychiatrist"), there is a very weak signal for the model to distinguish between the different classes, resulting in inferior performance. <<</Performance on different domains (SGD)>>> <<</Evaluation>>> <<</Zero-Shot Dialogue State Tracking>>> <<<Discussion>>> It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below. Fewer Annotation Errors: All annotations are automatically generated, so these errors are rare. In contrast, BIBREF5 reported annotation errors in 40% of turns in MultiWOZ 2.0 which utilized a Wizard-of-Oz setup. Simpler Task: The crowd worker task of paraphrasing a readable utterance for each turn is simple. The error-prone annotation task requiring skilled workers is not needed. Low Cost: The simplicity of the crowd worker task and lack of an annotation task greatly cut data collection costs. Better Coverage: A wide variety of dialogue flows can be collected and specific usecases can be targeted. <<</Discussion>>> <<<Conclusions>>> We presented the Schema-Guided Dialogue dataset to encourage scalable modeling approaches for virtual assistants. We also introduced the schema-guided paradigm for task-oriented dialogue that simplifies the integration of new services and APIs with large scale virtual assistants. Building upon this paradigm, we present a scalable zero-shot dialogue state tracking model achieving state-of-the-art results. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nThe Schema-Guided Dialogue Dataset\nServices and APIs\nDialogue Simulator Framework\nDialogue Paraphrasing\nDataset Analysis\nThe Schema-Guided Approach\nZero-Shot Dialogue State Tracking\nModel\nSchema Embedding\nUtterance Encoding\nProjection\nActive Intent\nRequested Slots\nUser Goal\nEvaluation\nPerformance on other datasets\nPerformance on SGD\nPerformance on different domains (SGD)\nDiscussion\nConclusions" ], "type": "outline" }
2004.04696
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> BLEURT: Learning Robust Metrics for Text Generation <<<Abstract>>> Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution. <<</Abstract>>> <<<Introduction>>> In the last few years, research in natural text generation (NLG) has made significant progress, driven largely by the neural encoder-decoder paradigm BIBREF0, BIBREF1 which can tackle a wide array of tasks including translation BIBREF2, summarization BIBREF3, BIBREF4, structured-data-to-text generation BIBREF5, BIBREF6, BIBREF7 dialog BIBREF8, BIBREF9 and image captioning BIBREF10. However, progress is increasingly impeded by the shortcomings of existing metrics BIBREF7, BIBREF11, BIBREF12. Human evaluation is often the best indicator of the quality of a system. However, designing crowd sourcing experiments is an expensive and high-latency process, which does not easily fit in a daily model development pipeline. Therefore, NLG researchers commonly use automatic evaluation metrics, which provide an acceptable proxy for quality and are very cheap to compute. This paper investigates sentence-level, reference-based metrics, which describe the extent to which a candidate sentence is similar to a reference one. The exact definition of similarity may range from string overlap to logical entailment. The first generation of metrics relied on handcrafted rules that measure the surface similarity between the sentences. To illustrate, BLEU BIBREF13 and ROUGE BIBREF14, two popular metrics, rely on N-gram overlap. Because those metrics are only sensitive to lexical variation, they cannot appropriately reward semantic or syntactic variations of a given reference. Thus, they have been repeatedly shown to correlate poorly with human judgment, in particular when all the systems to compare have a similar level of accuracy BIBREF15, BIBREF16, BIBREF17. Increasingly, NLG researchers have addressed those problems by injecting learned components in their metrics. To illustrate, consider the WMT Metrics Shared Task, an annual benchmark in which translation metrics are compared on their ability to imitate human assessments. The last two years of the competition were largely dominated by neural net-based approaches, RUSE, YiSi and ESIM BIBREF18, BIBREF11. Current approaches largely fall into two categories. Fully learned metrics, such as BEER, RUSE, and ESIM are trained end-to-end, and they typically rely on handcrafted features and/or learned embeddings. 
Conversely, hybrid metrics, such as YiSi and BERTscore, combine trained elements, e.g., contextual embeddings, with handwritten logic, e.g., token alignment rules. The first category typically offers great expressivity: if a training set of human ratings data is available, the metrics may take full advantage of it and fit the ratings distribution tightly. Furthermore, learned metrics can be tuned to measure task-specific properties, such as fluency, faithfulness, grammaticality, or style. On the other hand, hybrid metrics offer robustness. They may provide better results when there is little to no training data, and they do not rely on the assumption that training and test data are identically distributed. And indeed, the iid assumption is particularly problematic in NLG evaluation because of domain drifts, which have been the main target of the metrics literature, but also because of quality drifts: NLG systems tend to get better over time, and therefore a model trained on ratings data from 2015 may fail to distinguish top performing systems in 2019, especially for newer research tasks. An ideal learned metric would be able to both take full advantage of available ratings data for training, and be robust to distribution drifts, i.e., it should be able to extrapolate. Our insight is that it is possible to combine expressivity and robustness by pre-training a fully learned metric on large amounts of synthetic data, before fine-tuning it on human ratings. To this end, we introduce Bleurt, a text generation metric based on BERT BIBREF19. A key ingredient of Bleurt is a novel pre-training scheme, which uses random perturbations of Wikipedia sentences augmented with a diverse set of lexical and semantic-level supervision signals. To demonstrate our approach, we train Bleurt for English and evaluate it under different generalization regimes. We first verify that it provides state-of-the-art results on all recent years of the WMT Metrics Shared task (2017 to 2019, to-English language pairs). We then stress-test its ability to cope with quality drifts with a synthetic benchmark based on WMT 2017. Finally, we show that it can easily adapt to a different domain with three tasks from a data-to-text dataset, WebNLG 2017 BIBREF20. Ablations show that our synthetic pretraining scheme increases performance in the iid setting, and is critical to ensure robustness when the training data is scarce, skewed, or out-of-domain. <<</Introduction>>> <<<Preliminaries>>> Define $\mathbf{x} = (x_1,..,x_{r})$ to be the reference sentence of length $r$ where each $x_i$ is a token and let $\tilde{\mathbf{x}} = (\tilde{x}_1,..,\tilde{x}_{p})$ be a prediction sentence of length $p$. Let $\lbrace (\mathbf{x}_n, \tilde{\mathbf{x}}_n, y_n)\rbrace _{n=1}^{N}$ be a training dataset of size $N$ where $y_n \in [0, 1]$ is the human rating that indicates how good $\tilde{\mathbf{x}}_n$ is with respect to $\mathbf{x}_n$. Given the training data, our goal is to learn a function $f: (\mathbf{x}, \tilde{\mathbf{x}}) \rightarrow y$ that predicts the human rating. <<</Preliminaries>>> <<<Fine-Tuning BERT for Quality Evaluation>>> Given the small amounts of rating data available, it is natural to leverage unsupervised representations for this task. In our model, we use BERT (Bidirectional Encoder Representations from Transformers) BIBREF19, which is an unsupervised technique that learns contextualized representations of sequences of text.
Given $\mathbf{x}$ and $\tilde{\mathbf{x}}$, BERT is a Transformer BIBREF21 that returns a sequence of contextualized vectors: $\mathbf{v}_{\mathrm{[CLS]}}, \mathbf{v}_{1}, \ldots, \mathbf{v}_{r+p} = \mathrm{BERT}(\mathbf{x}, \tilde{\mathbf{x}})$, where $\mathbf{v}_{\mathrm {[CLS]}}$ is the representation for the special $\mathrm {[CLS]}$ token. As described by BIBREF19, we add a linear layer on top of the $\mathrm {[CLS]}$ vector to predict the rating: $\hat{y} = f(\mathbf{x}, \tilde{\mathbf{x}}) = \mathbf{W} \mathbf{v}_{\mathrm{[CLS]}} + \mathbf{b}$, where $\mathbf{W}$ and $\mathbf{b}$ are the weight matrix and bias vector respectively. Both the above linear layer as well as the BERT parameters are trained (i.e. fine-tuned) on the supervised data which typically numbers in a few thousand examples. We use the regression loss $\ell _{\textrm {supervised}} = \frac{1}{N} \sum _{n=1}^{N} \Vert y_n - \hat{y}_n \Vert ^2 $. Although this approach is quite straightforward, we will show in Section SECREF5 that it gives state-of-the-art results on WMT Metrics Shared Task 17-19, which makes it a high-performing evaluation metric. However, fine-tuning BERT requires a sizable amount of iid data, which is less than ideal for a metric that should generalize to a variety of tasks and model drift. <<</Fine-Tuning BERT for Quality Evaluation>>> <<<Pre-Training on Synthetic Data>>> The key aspect of our approach is a pre-training technique that we use to “warm up” BERT before fine-tuning on rating data. We generate a large number of synthetic reference-candidate pairs $(\mathbf{x}, \tilde{\mathbf{x}})$, and we train BERT on several lexical- and semantic-level supervision signals with a multitask loss. As our experiments will show, Bleurt generalizes much better after this phase, especially with incomplete training data. Any pre-training approach requires a dataset and a set of pre-training tasks. Ideally, the setup should resemble the final NLG evaluation task, i.e., the sentence pairs should be distributed similarly and the pre-training signals should correlate with human ratings. Unfortunately, we cannot have access to the NLG models that we will evaluate in the future. Therefore, we optimized our scheme for generality, with three requirements. (1) The set of reference sentences should be large and diverse, so that Bleurt can cope with a wide range of NLG domains and tasks. (2) The sentence pairs should contain a wide variety of lexical, syntactic, and semantic dissimilarities. The aim here is to anticipate all variations that an NLG system may produce, e.g., phrase substitution, paraphrases, noise, or omissions. (3) The pre-training objectives should effectively capture those phenomena, so that Bleurt can learn to identify them. The following sections present our approach. <<<Generating Sentence Pairs>>> One way to expose Bleurt to a wide variety of sentence differences is to use existing sentence pair datasets BIBREF22, BIBREF23, BIBREF24. These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions). We opted for an automatic approach instead, which can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(\mathbf{x}, \tilde{\mathbf{x}})$ by randomly perturbing 1.8 million segments $\mathbf{x}$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. We obtain about 6.5 million perturbations $\tilde{\mathbf{x}}$. Let us describe those techniques. <<<Mask-filling with BERT:>>> BERT's initial training task is to fill gaps (i.e., masked tokens) in tokenized sentences. We leverage this functionality by inserting masks at random positions in the Wikipedia sentences, and fill them with the language model.
Thus, we introduce lexical alterations while maintaining the fluency of the sentence. We use two masking strategies—we either introduce the masks at random positions in the sentences, or we create contiguous sequences of masked tokens. More details are provided in the Appendix. <<</Mask-filling with BERT:>>> <<<Backtranslation:>>> We generate paraphrases and perturbations with backtranslation, that is, round trips from English to another language and then back to English with a translation model BIBREF25, BIBREF26, BIBREF27. Our primary aim is to create variants of the reference sentence that preserve semantics. Additionally, we use the mispredictions of the backtranslation models as a source of realistic alterations. <<</Backtranslation:>>> <<<Dropping words:>>> We found it useful in our experiments to randomly drop words from the synthetic examples above to create other examples. This method prepares Bleurt for “pathological” behaviors of NLG systems, e.g., void predictions, or sentence truncation. <<</Dropping words:>>> <<</Generating Sentence Pairs>>> <<<Pre-Training Signals>>> The next step is to augment each sentence pair $(\mathbf{x}, \tilde{\mathbf{x}})$ with a set of pre-training signals $\lbrace {\tau }_k\rbrace $, where ${\tau }_k$ is the target vector of pre-training task $k$. Good pre-training signals should capture a wide variety of lexical and semantic differences. They should also be cheap to obtain, so that the approach can scale to large amounts of synthetic data. The following section presents our 9 pre-training tasks, summarized in Table TABREF3. Additional implementation details are in the Appendix. <<<Automatic Metrics:>>> We create three signals ${\tau _{\text{BLEU}}}$, ${\tau _{\text{ROUGE}}}$, and ${\tau _{\text{BERTscore}}}$ with sentence BLEU BIBREF13, ROUGE BIBREF14, and BERTscore BIBREF28 respectively (we use precision, recall and F-score for the latter two). <<</Automatic Metrics:>>> <<<Backtranslation Likelihood:>>> The idea behind this signal is to leverage existing translation models to measure semantic equivalence. Given a pair $(\mathbf{x}, \tilde{\mathbf{x}})$, this training signal measures the probability that $\tilde{\mathbf{x}}$ is a backtranslation of $\mathbf{x}$, $P(\tilde{\mathbf{x}} \mid \mathbf{x})$, normalized by the length of $\tilde{\mathbf{x}}$. Let $P_{\texttt {en}\rightarrow \texttt {fr}}(\mathbf{z}_{\texttt {fr}} \mid \mathbf{x})$ be a translation model that assigns probabilities to French sentences $\mathbf{z}_{\texttt {fr}}$ conditioned on English sentences $\mathbf{x}$ and let $P_{\texttt {fr}\rightarrow \texttt {en}}(\mathbf{x} \mid \mathbf{z}_{\texttt {fr}})$ be a translation model that assigns probabilities to English sentences given French sentences. If $|\tilde{\mathbf{x}}|$ is the number of tokens in $\tilde{\mathbf{x}}$, we define our score as ${\tau }_{\text{en-fr}, \tilde{\mathbf{x}} \mid \mathbf{x}} = \frac{\log P(\tilde{\mathbf{x}} \mid \mathbf{x})}{|\tilde{\mathbf{x}}|}$, with $P(\tilde{\mathbf{x}} \mid \mathbf{x}) = \sum _{\mathbf{z}_{\texttt {fr}}} P_{\texttt {fr}\rightarrow \texttt {en}}(\tilde{\mathbf{x}} \mid \mathbf{z}_{\texttt {fr}}) \, P_{\texttt {en}\rightarrow \texttt {fr}}(\mathbf{z}_{\texttt {fr}} \mid \mathbf{x})$. Because computing the summation over all possible French sentences is intractable, we approximate the sum using $\mathbf{z}_{\texttt {fr}}^\ast = \operatorname{argmax}_{\mathbf{z}_{\texttt {fr}}} P_{\texttt {en}\rightarrow \texttt {fr}} (\mathbf{z}_{\texttt {fr}} \mid \mathbf{x})$ and we assume that $P_{\texttt {en}\rightarrow \texttt {fr}}(\mathbf{z}_{\texttt {fr}}^\ast \mid \mathbf{x}) \approx 1$, so that $P(\tilde{\mathbf{x}} \mid \mathbf{x}) \approx P_{\texttt {fr}\rightarrow \texttt {en}}(\tilde{\mathbf{x}} \mid \mathbf{z}_{\texttt {fr}}^\ast )$. We can trivially reverse the procedure to compute $P(\mathbf{x} \mid \tilde{\mathbf{x}})$, thus we create 4 pre-training signals ${\tau }_{\text{en-fr}, \mathbf{x} \mid \tilde{\mathbf{x}}}$, ${\tau }_{\text{en-fr}, \tilde{\mathbf{x}} \mid \mathbf{x}}$, ${\tau }_{\text{en-de}, \mathbf{x} \mid \tilde{\mathbf{x}}}$, ${\tau }_{\text{en-de}, \tilde{\mathbf{x}} \mid \mathbf{x}}$ with two pairs of languages ($\texttt {en}\leftrightarrow \texttt {de}$ and $\texttt {en}\leftrightarrow \texttt {fr}$) in both directions.
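A minimal sketch of how such a length-normalized backtranslation likelihood could be computed is given below; this is our illustration only, and the Helsinki-NLP Marian checkpoints and the Hugging Face API are assumptions rather than the translation models used in the paper.

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

# Off-the-shelf en<->fr Marian checkpoints stand in for the paper's translation models.
tok_ef = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
mt_ef = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
tok_fe = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fr-en")
mt_fe = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-fr-en")

def backtranslation_likelihood(x: str, x_tilde: str) -> float:
    """Approximate tau_{en-fr, x_tilde | x} = log P(x_tilde | x) / |x_tilde|."""
    with torch.no_grad():
        # Step 1: decode z*_fr, the most likely French translation of the reference x.
        z_fr_ids = mt_ef.generate(**tok_ef(x, return_tensors="pt"))
        z_fr = tok_ef.decode(z_fr_ids[0], skip_special_tokens=True)
        # Step 2: score x_tilde under the fr->en model; the loss returned when labels are
        # provided is the mean per-token negative log-likelihood, i.e. length-normalized.
        batch = tok_fe(z_fr, text_target=x_tilde, return_tensors="pt")
        loss = mt_fe(**batch).loss
    return -loss.item()

# Example call (toy strings): backtranslation_likelihood("The cat sat.", "A cat was sitting.")
```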
<<</Backtranslation Likelihood:>>> <<<Textual Entailment:>>> The signal ${\tau }_\text{entail}$ expresses whether $\mathbf{x}$ entails or contradicts $\tilde{\mathbf{x}}$ using a classifier. We report the probability of three labels: Entail, Contradict, and Neutral, using BERT fine-tuned on an entailment dataset, MNLI BIBREF19, BIBREF23. <<</Textual Entailment:>>> <<<Backtranslation flag:>>> The signal ${\tau }_\text{backtran\_flag}$ is a Boolean that indicates whether the perturbation was generated with backtranslation or with mask-filling. <<</Backtranslation flag:>>> <<</Pre-Training Signals>>> <<<Modeling>>> For each pre-training task, our model uses either a regression or a classification loss. We then aggregate the task-level losses with a weighted sum. Let ${\tau }_k$ describe the target vector for each task, e.g., the probabilities for the classes Entail, Contradict, Neutral, or the precision, recall, and F-score for ROUGE. If ${\tau }_k$ is a regression task, then the loss used is the $\ell _2$ loss, i.e., $\ell _k = \Vert {\tau }_k - \hat{{\tau }}_k \Vert _2^2 / |{\tau }_k|$ where $|{\tau }_k|$ is the dimension of ${\tau }_k$ and $\hat{{\tau }}_k$ is computed by using a task-specific linear layer on top of the $\textrm {[CLS]}$ embedding: $\hat{{\tau }}_k = \mathbf{W}_{\tau _k} \mathbf{v}_{\textrm {[CLS]}} + \mathbf{b}_{\tau _k}$. If ${\tau }_k$ is a classification task, we use a separate linear layer to predict a logit for each class $c$: $\hat{{\tau }}_{kc} = \mathbf{w}_{\tau _{kc}} \cdot \mathbf{v}_{\textrm {[CLS]}} + b_{\tau _{kc}}$, and we use the multiclass cross-entropy loss. We define our aggregate pre-training loss function as follows: $\ell _{\text{pre-training}} = \frac{1}{M} \sum _{m=1}^{M} \sum _{k=1}^{K} \gamma _k \, \ell _k({\tau }_k^m, \hat{{\tau }}_k^m)$, where ${\tau }_k^m$ is the target vector for example $m$, $M$ is the number of synthetic examples, and $\gamma _k$ are hyperparameter weights obtained with grid search (more details in the Appendix). <<</Modeling>>> <<</Pre-Training on Synthetic Data>>> <<<Experiments>>> In this section, we report our experimental results for two tasks, translation and data-to-text. First, we benchmark Bleurt against existing text generation metrics on the last 3 years of the WMT Metrics Shared Task BIBREF29. We then evaluate its robustness to quality drifts with a series of synthetic datasets based on WMT17. We test Bleurt's ability to adapt to different tasks with the WebNLG 2017 Challenge Dataset BIBREF20. Finally, we measure the contribution of each pre-training task with ablation experiments. <<<Our Models:>>> Unless specified otherwise, all Bleurt models are trained in three steps: regular BERT pre-training BIBREF19, pre-training on synthetic data (as explained in Section SECREF4), and fine-tuning on task-specific ratings (translation and/or data-to-text). We experiment with two versions of Bleurt, BLEURT and BLEURTbase, respectively based on BERT-Large (24 layers, 1024 hidden units, 16 heads) and BERT-Base (12 layers, 768 hidden units, 12 heads) BIBREF19, both uncased. We use batch size 32, learning rate 1e-5, and 800,000 steps for pre-training and 40,000 steps for fine-tuning. We provide the full detail of our training setup in the Appendix. <<</Our Models:>>> <<<WMT Metrics Shared Task>>> <<<Datasets and Metrics:>>> We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which includes several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year.
The test sets for years 2018 and 2019 are noisier, as reported by the organizers and shown by the overall lower correlations. We evaluate the agreement between the automatic metrics and the human ratings. For each year, we report two metrics: Kendall's Tau $\tau $ (for consistency across experiments), and the official WMT metric for that year (for completeness). The official WMT metric is either Pearson's correlation or a robust variant of Kendall's Tau called DARR, described in the Appendix. All the numbers come from our own implementation of the benchmark. Our results are globally consistent with the official results but we report small differences in 2018 and 2019, marked in the tables. <<</Datasets and Metrics:>>> <<<Models:>>> We experiment with four versions of Bleurt: BLEURT, BLEURTbase, BLEURT -pre and BLEURTbase -pre. The first two models are based on BERT-large and BERT-base. In the latter two versions, we skip the pre-training phase and fine-tune directly on the WMT ratings. For each year of the WMT shared task, we use the test set from the previous years for training and validation. We describe our setup in further detail in the Appendix. We compare Bleurt to participant data from the shared task and automatic metrics that we ran ourselves. In the former case, we use the best-performing contestants for each year, that is, chrF++, BEER, Meteor++, RUSE, Yisi1, ESIM and Yisi1-SRL BIBREF30. All the contestants use the same WMT training data, in addition to existing sentence or token embeddings. In the latter case, we use Moses sentenceBLEU, BERTscore BIBREF28, and MoverScore BIBREF31. For BERTscore, we use BERT-large uncased for fairness, and RoBERTa (the recommended version) for completeness BIBREF32. We run MoverScore on WMT 2017 using the scripts published by the authors. <<</Models:>>> <<<Results:>>> Tables TABREF14, TABREF15, TABREF16 show the results. For years 2017 and 2018, a Bleurt-based metric dominates the benchmark for each language pair (Tables TABREF14 and TABREF15). BLEURT and BLEURTbase are also competitive for year 2019: they yield the best results for every language pair on Kendall's Tau, and they come first for 4 out of 7 pairs on DARR. As expected, BLEURT dominates BLEURTbase in the majority of cases. Pre-training consistently improves the results of BLEURT and BLEURTbase. We observe the largest effect on year 2017, where it adds up to 7.4 Kendall Tau points for BLEURTbase (zh-en). The effect is milder on years 2018 and 2019, up to 2.1 points (tr-en, 2018). We explain the difference by the fact that the training data used for 2017 is smaller than the datasets used for the following years, so pre-training is likelier to help. In general, pre-training yields higher returns for BERT-base than for BERT-large—in fact, BLEURTbase with pre-training is often better than BLEURT without. Takeaways: Pre-training delivers consistent improvements, especially for BERT-base. Bleurt yields state-of-the-art performance for all years of the WMT Metrics Shared Task. <<</Results:>>> <<</WMT Metrics Shared Task>>> <<<Robustness to Quality Drift>>> We assess our claim that pre-training makes Bleurt robust to quality drifts by constructing a series of tasks for which it is increasingly pressured to extrapolate. All the experiments that follow are based on the WMT Metrics Shared Task 2017, because the ratings for this edition are particularly reliable.
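As in the benchmark above, agreement between an automatic metric and human ratings in these experiments is summarized with segment-level Kendall's Tau; the snippet below is a minimal illustration of that computation (our own sketch with toy numbers, not the paper's evaluation scripts).

```python
from scipy.stats import kendalltau

def metric_human_agreement(metric_scores, human_ratings):
    """Segment-level Kendall's Tau between automatic scores and human ratings."""
    tau, _p_value = kendalltau(metric_scores, human_ratings)
    return tau

# Toy example (made-up numbers, not WMT data): identical orderings give tau = 1.0.
print(metric_human_agreement([0.9, 0.4, 0.7], [0.95, 0.30, 0.60]))
```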
<<<Methodology:>>> We create increasingly challenging datasets by sub-sampling the records from the WMT Metrics shared task, keeping low-rated translations for training and high-rated translations for test. The key parameter is the skew factor $\alpha $, which measures how much the training data is left-skewed and the test data is right-skewed. Figure FIGREF24 demonstrates the ratings distribution that we used in our experiments. The training data shrinks as $\alpha $ increases: in the most extreme case ($\alpha =3.0$), we use only 11.9% of the original 5,344 training records. We give the full detail of our sampling methodology in the Appendix. We use BLEURT with and without pre-training and we compare to Moses sentBLEU and BERTscore. We use BERT-large uncased for both BLEURT and BERTscore. <<</Methodology:>>> <<<Takeaways:>>> Pre-training makes BLEURT significantly more robust to quality drifts. <<</Takeaways:>>> <<</Robustness to Quality Drift>>> <<<WebNLG Experiments>>> In this section, we evaluate Bleurt's performance on three tasks from a data-to-text dataset, the WebNLG Challenge 2017 BIBREF33. The aim is to assess Bleurt's capacity to adapt to new tasks with limited training data. <<<Dataset and Evaluation Tasks:>>> The WebNLG challenge benchmarks systems that produce natural language descriptions of entities (e.g., buildings, cities, artists) from sets of 1 to 5 RDF triples. The organizers released the human assessments for 9 systems over 223 inputs, that is, 4,677 sentence pairs in total (we removed null values). Each input comes with 1 to 3 reference descriptions. The submissions are evaluated on 3 aspects: semantics, grammar, and fluency. We treat each type of rating as a separate modeling task. The data has no natural split between train and test; therefore, we experiment with several schemes. We allocate 0% to about 50% of the data to training, and we split on either the evaluated systems or the RDF inputs in order to test different generalization regimes. <<</Dataset and Evaluation Tasks:>>> <<<Systems and Baselines:>>> BLEURT -pre -wmt is a public BERT-large uncased checkpoint directly trained on the WebNLG ratings. BLEURT -wmt was first pre-trained on synthetic data, then fine-tuned on WebNLG data. BLEURT was trained in three steps: first on synthetic data, then on WMT data (16-18), and finally on WebNLG data. When a record comes with several references, we run BLEURT on each reference and report the highest value BIBREF28. We report four baselines: BLEU, TER, Meteor, and BERTscore. The first three were computed by the WebNLG competition organizers. We ran the latter one ourselves, using BERT-large uncased for a fair comparison. <<</Systems and Baselines:>>> <<</WebNLG Experiments>>> <<<Ablation Experiments>>> Figure FIGREF36 presents our ablation experiments on WMT 2017, which highlight the relative importance of each pre-training task. On the left side, we compare Bleurt pre-trained on a single task to Bleurt without pre-training. On the right side, we compare full Bleurt to Bleurt pre-trained on all tasks except one. Pre-training on BERTscore, entailment, and the backtranslation scores yields improvements (symmetrically, ablating them degrades Bleurt). Conversely, BLEU and ROUGE have a negative impact. We conclude that pre-training on high-quality signals helps BLEURT, but that metrics that correlate less well with human judgment may in fact harm the model.
<<</Ablation Experiments>>> <<</Experiments>>> <<<Related Work>>> The WMT shared metrics competition BIBREF34, BIBREF18, BIBREF11 has inspired the creation of many learned metrics, some of which use regression or deep learning BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF30. Other metrics have been introduced, such as the recent MoverScore BIBREF31 which combines contextual embeddings and Earth Mover's Distance. We provide a head-to-head comparison with the best performing of those in our experiments. Other approaches do not attempt to estimate quality directly, but use information extraction or question answering as a proxy BIBREF7, BIBREF39, BIBREF40. Those are complementary to our work. There has been recent work that uses BERT for evaluation. BERTScore BIBREF28 proposes replacing the hard n-gram overlap of BLEU with a soft-overlap using BERT embeddings. We use it in all our experiments. Bertr BIBREF30 and YiSi BIBREF30 also make use of BERT embeddings to compute a similarity score. Sum-QE BIBREF41 fine-tunes BERT for quality estimation as we describe in Section SECREF3. Our focus is different—we train metrics that are not only state-of-the-art in conventional iid experimental setups, but also robust in the presence of scarce and out-of-distribution training data. To our knowledge, no existing work has explored pre-training and extrapolation in the context of NLG. Noisy pre-training has been proposed before for other tasks such as paraphrasing BIBREF42, BIBREF43 but generally not with synthetic data. Generating synthetic data via paraphrases and perturbations has been commonly used for generating adversarial examples BIBREF44, BIBREF45, BIBREF46, BIBREF47, an orthogonal line of research. <<</Related Work>>> <<<Conclusion>>> We presented Bleurt, a reference-based text generation metric for English. Because the metric is trained end-to-end, Bleurt can model human assessment with superior accuracy. Furthermore, pre-training makes the metric particularly robust to both domain and quality drifts. Future research directions include multilingual NLG evaluation, and hybrid methods involving both humans and classifiers. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nPreliminaries\nFine-Tuning BERT for Quality Evaluation\nPre-Training on Synthetic Data\nGenerating Sentence Pairs\nMask-filling with BERT:\nBacktranslation:\nDropping words:\nPre-Training Signals\nAutomatic Metrics:\nBacktranslation Likelihood:\nTextual Entailment:\nBacktranslation flag:\nModeling\nExperiments\nOur Models:\nWMT Metrics Shared Task\nDatasets and Metrics:\nModels:\nResults:\nRobustness to Quality Drift\nMethodology:\nTakeaways:\nWebNLG Experiments\nDataset and Evaluation Tasks:\nSystems and Baselines:\nAblation Experiments\nRelated Work\nConclusion" ], "type": "outline" }
1911.05960
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Contextual Recurrent Units for Cloze-style Reading Comprehension <<<Abstract>>> Recurrent Neural Networks (RNN) are known as powerful models for handling sequential data, and especially widely utilized in various natural language processing tasks. In this paper, we propose Contextual Recurrent Units (CRU) for enhancing local contextual representations in neural networks. The proposed CRU injects convolutional neural networks (CNN) into the recurrent units to enhance the ability to model the local context and reducing word ambiguities even in bi-directional RNNs. We tested our CRU model on sentence-level and document-level modeling NLP tasks: sentiment classification and reading comprehension. Experimental results show that the proposed CRU model could give significant improvements over traditional CNN or RNN models, including bidirectional conditions, as well as various state-of-the-art systems on both tasks, showing its promising future of extensibility to other NLP tasks as well. <<</Abstract>>> <<<Introduction>>> Neural network based approaches have become popular frameworks in many machine learning research fields, showing its advantages over traditional methods. In NLP tasks, two types of neural networks are widely used: Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN). RNNs are powerful models in various NLP tasks, such as machine translation BIBREF0, sentiment classification BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, reading comprehension BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, etc. The recurrent neural networks can flexibly model different lengths of sequences into a fixed representation. There are two main implementations of RNN: Long Short-Term Memory (LSTM) BIBREF12 and Gated Recurrent Unit (GRU) BIBREF0, which solve the gradient vanishing problems in vanilla RNNs. Compared to RNN, the CNN model also shows competitive performances in some tasks, such as text classification BIBREF13, etc. However, different from RNN, CNN sets a pre-defined convolutional kernel to “summarize” a fixed window of adjacent elements into blended representations, showing its ability of modeling local context. As both global and local information is important in most of NLP tasks BIBREF14, in this paper, we propose a novel recurrent unit, called Contextual Recurrent Unit (CRU). The proposed CRU model adopts advantages of RNN and CNN, where CNN is good at modeling local context, and RNN is superior in capturing long-term dependencies. We propose three variants of our CRU model: shallow fusion, deep fusion and deep-enhanced fusion. To verify the effectiveness of our CRU model, we utilize it into two different NLP tasks: sentiment classification and reading comprehension, where the former is sentence-level modeling, and the latter is document-level modeling. In the sentiment classification task, we build a standard neural network and replace the recurrent unit by our CRU model. To further demonstrate the effectiveness of our model, we also tested our CRU in reading comprehension tasks with a strengthened baseline system originated from Attention-over-Attention Reader (AoA Reader) BIBREF10. 
Experimental results on public datasets show that our CRU model could substantially outperform various systems by a large margin, and set up new state-of-the-art performances on related datasets. The main contributions of our work are listed as follows. [leftmargin=*] We propose a novel neural recurrent unit called Contextual Recurrent Unit (CRU), which effectively incorporate the advantage of CNN and RNN. Different from previous works, our CRU model shows its excellent flexibility as GRU and provides better performance. The CRU model is applied to both sentence-level and document-level modeling tasks and gives state-of-the-art performances. The CRU could also give substantial improvements in cloze-style reading comprehension task when the baseline system is strengthened by incorporating additional features which will enrich the representations of unknown words and make the texts more readable to the machine. <<</Introduction>>> <<<Related Works>>> Gated recurrent unit (GRU) has been proposed in the scenario of neural machine translations BIBREF0. It has been shown that the GRU has comparable performance in some tasks compared to the LSTM. Another advantage of GRU is that it has a simpler neural architecture than LSTM, showing a much efficient computation. However, convolutional neural network (CNN) is not as popular as RNNs in NLP tasks, as the texts are formed temporally. But in some studies, CNN shows competitive performance to the RNN models, such as text classification BIBREF13. Various efforts have been made on combining CNN and RNN. BIBREF3 proposed an architecture that combines CNN and GRU model with pre-trained word embeddings by word2vec. BIBREF5 proposed to combine asymmetric convolution neural network with the bidirectional LSTM network. BIBREF4 presented Dependency Sensitive CNN, which hierarchically construct text by using LSTMs and extracting features with convolution operations subsequently. BIBREF15 propose to make use of dependency relations information in the shortest dependency path (SDP) by combining CNN and two-channel LSTM units. BIBREF16 build a neural network for dialogue topic tracking where the CNN used to account for semantics at individual utterance and RNN for modeling conversational contexts along multiple turns in history. The difference between our CRU model and previous works can be concluded as follows. [leftmargin=*] Our CRU model could adaptively control the amount of information that flows into different gates, which was not studied in previous works. Also, the CRU does not introduce a pooling operation, as opposed to other works, such as CNN-GRU BIBREF3. Our motivation is to provide flexibility as the original GRU, while the pooling operation breaks this law (the output length is changed), and it is unable to do exact word-level attention over the output. However, in our CRU model, the output length is the same as the input's and can be easily applied to various tasks where the GRU used to. We also observed that by only using CNN to conclude contextual information is not strong enough. So we incorporate the original word embeddings to form a "word + context" representation for enhancement. <<</Related Works>>> <<<Our approach>>> In this section, we will give a detailed introduction to our CRU model. Firstly, we will give a brief introduction to GRU BIBREF0 as preliminaries, and then three variants of our CRU model will be illustrated. 
<<<Gated Recurrent Unit>>> Gated Recurrent Unit (GRU) is a type of recurrent unit that models sequential data BIBREF0. It is similar to LSTM but much simpler and more computationally efficient. We will briefly introduce the formulation of GRU. Given a sequence $x = \lbrace x_1, x_2, ..., x_n\rbrace $, GRU processes the data in the following way (for simplicity, the bias terms are omitted): $z_t = \sigma (W_z x_t + U_z h_{t-1})$, $r_t = \sigma (W_r x_t + U_r h_{t-1})$, $\widetilde{h}_t = \tanh (W x_t + U(r_t \odot h_{t-1}))$, $h_t = (1-z_t) \odot h_{t-1} + z_t \odot \widetilde{h}_t$, where $z_t$ is the update gate, $r_t$ is the reset gate, and the non-linear function $\sigma $ is often chosen as the $sigmoid$ function. In many NLP tasks, we often use a bi-directional GRU, which takes both forward and backward information into account. <<</Gated Recurrent Unit>>> <<<Contextual Recurrent Unit>>> Modeling only word-level representations may have drawbacks in representing words that have different meanings when the context varies. Here is an example that shows this problem. There are many fan mails in the mailbox. There are many fan makers in the factory. As we can see, though the two sentences share the same beginning before the word fan, the meanings of the word fan itself are totally different when we meet the following words mails and makers. The first fan means “a person that has strong interests in a person or thing", and the second one means “a machine with rotating blades for ventilation". However, the embedding of the word fan does not discriminate according to the context. Also, as the two sentences have the same beginning, when we apply a recurrent operation (such as GRU) up to the word fan, the output of the GRU does not change, though the sentences have entirely different meanings once we see the following words. To enrich the word representation with local contextual information and diminish word ambiguities, we propose a model as an extension to the GRU, called Contextual Recurrent Unit (CRU). In this model, we take full advantage of the convolutional neural network and the recurrent neural network, where the former is good at modeling local information, and the latter is capable of capturing long-term dependencies. Moreover, in the experiment part, we will also show that our bidirectional CRU can significantly outperform the bidirectional GRU model. In this paper, we propose three different types of CRU models: shallow fusion, deep fusion and deep-enhanced fusion, from the most fundamental one to the most expressive one. We will describe these models in detail in the following sections. <<<Shallow Fusion>>> The simplest one is to directly apply a CNN layer after the embedding layer to obtain blended contextual representations, with a GRU layer applied afterward. We call this model shallow fusion, because the CNN and RNN are applied linearly without changing the inner architecture of either. Formally, given sequential data $x = \lbrace x_1, x_2, ..., x_n\rbrace $, a shallow fusion of CRU can be illustrated as follows. We first transform each word $x_t$ into a word embedding $e_t$ through an embedding matrix $W_e$. Then a convolutional operation $\phi $ is applied to the context of $e_t$, denoted as $\widetilde{e_t}$, to obtain the contextual representation $c_t = \phi (\widetilde{e_t})$. Finally, the contextual representation $c_t$ is fed into the GRU units. Following BIBREF13, we apply an embedding-wise convolution operation, which is commonly used in natural language processing tasks. Let $e_{i:j} \in \mathbb {R}^{(j-i+1) \times d}$ denote the concatenation of $j-i+1$ consecutive $d$-dimensional word embeddings.
The embedding-wise convolution applies a convolution filter w $\in \mathbb {R}^{k \times d}$ to a window of $k$ word embeddings to generate a new feature, i.e., summarizing a local context of $k$ words. This can be formulated as $c_i = f(w \cdot e_{i:i+k-1} + b)$, where $f$ is a non-linear function and $b$ is the bias. By applying the convolutional filter to all possible windows in the sentence, a feature map $c$ is generated. In this paper, we apply a same-length convolution (the length of the sentence does not change), i.e. $c \in \mathbb {R}^{n \times 1}$. Then we apply $d$ filters with the same window size to obtain multiple feature maps, so the final output of the CNN has the shape $C \in \mathbb {R}^{n \times d}$, which is exactly the same size as the $n$ word embeddings; this enables us to do exact word-level attention in various tasks. <<</Shallow Fusion>>> <<<Deep Fusion>>> The contextual information that flows into the update gate and the reset gate of the GRU is identical in shallow fusion. In order to let the model adaptively control the amount of information that flows into these gates, we can embed the CNN into the GRU in a deep manner. We can rewrite the update gate, reset gate and candidate state of the GRU as $z_t = \sigma (W_z \phi _z(\widetilde{e_t}) + U_z h_{t-1})$, $r_t = \sigma (W_r \phi _r(\widetilde{e_t}) + U_r h_{t-1})$ and $\widetilde{h}_t = \tanh (W \phi (\widetilde{e_t}) + U(r_t \odot h_{t-1}))$, where $\phi _z, \phi _r, \phi $ are three different CNN layers, i.e., the weights are not shared. When the weights are shared across these CNNs, the deep fusion degrades to shallow fusion. <<</Deep Fusion>>> <<<Deep-Enhanced Fusion>>> In shallow fusion and deep fusion, we used the convolutional operation to summarize the context. However, one drawback is that the original word embedding might be blurred by blending the words around it, i.e., by applying the convolutional operation to its context. To better model the original word and its context, we enhance the deep fusion model with the original word embedding information, with the intuition of “enriching word representation with contextual information while preserving its basic meaning”. Figure FIGREF17 illustrates our motivation. Formally, the deep fusion equations above can be further rewritten as $z_t = \sigma (W_z (\phi _z(\widetilde{e_t}) + e_t) + U_z h_{t-1})$, $r_t = \sigma (W_r (\phi _r(\widetilde{e_t}) + e_t) + U_r h_{t-1})$ and $\widetilde{h}_t = \tanh (W (\phi (\widetilde{e_t}) + e_t) + U(r_t \odot h_{t-1}))$, where we add the original word embedding $e_t$ after the CNN operation, to “enhance” the original word information while not losing the contextual information learned from the CNNs. <<</Deep-Enhanced Fusion>>> <<</Contextual Recurrent Unit>>> <<</Our approach>>> <<<Applications>>> The proposed CRU model is a general neural recurrent unit, so we can apply it to various NLP tasks. To examine whether the CRU model can give improvements in both sentence-level and document-level modeling tasks, we applied the CRU model to two NLP tasks: sentiment classification and cloze-style reading comprehension. In the sentiment classification task, we build a simple neural model and apply our CRU. In the cloze-style reading comprehension task, we first present some modifications to a recent reading comprehension model, called AoA Reader BIBREF10, and then replace the GRU part with our CRU model to see if our model can give substantial improvements over strong baselines. <<<Sentiment Classification>>> In the sentiment classification task, we aim to classify movie reviews, where one movie review is classified into the positive/negative or subjective/objective category. A general neural network architecture for this task is depicted in Figure FIGREF20. First, the movie review is transformed into word embeddings. Then a sequence modeling module is applied, in which we can adopt LSTM, GRU, or our CRU, to capture the inner relations of the text.
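The sequence modeling module mentioned here is where the CRU slots in, and the deep-enhanced fusion reconstructed above is compact enough to sketch directly. The PyTorch-style Python snippet below is only an illustration of that reconstruction, not the authors' implementation: the class name, the kernel size of 3, the zero initial hidden state and the single unidirectional pass are all assumptions made for the example.

import torch
import torch.nn as nn

class CRUCell(nn.Module):
    """Illustrative deep-enhanced fusion CRU (a sketch, not the authors' code)."""
    def __init__(self, emb_dim, hidden_dim, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2                    # odd kernel assumed, keeps length n
        self.conv_z = nn.Conv1d(emb_dim, emb_dim, kernel_size, padding=pad)
        self.conv_r = nn.Conv1d(emb_dim, emb_dim, kernel_size, padding=pad)
        self.conv_h = nn.Conv1d(emb_dim, emb_dim, kernel_size, padding=pad)
        self.W_z = nn.Linear(emb_dim, hidden_dim, bias=False)
        self.U_z = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_r = nn.Linear(emb_dim, hidden_dim, bias=False)
        self.U_r = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_h = nn.Linear(emb_dim, hidden_dim, bias=False)
        self.U_h = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, emb):
        # emb: (batch, seq_len, emb_dim) word embeddings e_1..e_n
        x = emb.transpose(1, 2)                   # (batch, emb_dim, seq_len) for Conv1d
        # "context + word" input for each gate: phi(e~_t) + e_t
        cz = self.conv_z(x).transpose(1, 2) + emb
        cr = self.conv_r(x).transpose(1, 2) + emb
        ch = self.conv_h(x).transpose(1, 2) + emb
        batch, seq_len, _ = emb.shape
        h = emb.new_zeros(batch, self.U_z.in_features)
        outputs = []
        for t in range(seq_len):
            z = torch.sigmoid(self.W_z(cz[:, t]) + self.U_z(h))
            r = torch.sigmoid(self.W_r(cr[:, t]) + self.U_r(h))
            h_tilde = torch.tanh(self.W_h(ch[:, t]) + self.U_h(r * h))
            h = (1 - z) * h + z * h_tilde
            outputs.append(h)
        return torch.stack(outputs, dim=1)        # (batch, seq_len, hidden_dim)

A bidirectional variant, as used in the experiments, would run a second such cell over the reversed sequence and concatenate the two output sequences.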
In this paper, we adopt bidirectional recurrent units for modeling sentences, and the final hidden outputs are concatenated. After that, a fully connected layer is added after the sequence modeling. Finally, the binary decision is made through a single $sigmoid$ unit. As shown, we employ a straightforward neural architecture for this task, as we purely want to compare our CRU model against other sequential models. The detailed experimental results of sentiment classification will be given in the next section. <<</Sentiment Classification>>> <<<Reading Comprehension>>> Besides the sentiment classification task, we also tried our CRU model in cloze-style reading comprehension, which is a much more complicated task. In this paper, we strengthen the recent AoA Reader BIBREF10 and apply our CRU model to see if we can obtain substantial improvements when the baseline is strengthened. <<<Task Description>>> Cloze-style reading comprehension is a fundamental task that explores relations between the document and the query. Formally, a general cloze-style query can be illustrated as a triple $\langle {\mathcal {D}}, {\mathcal {Q}}, {\mathcal {A}} \rangle $, where $\mathcal {D}$ is the document, $\mathcal {Q}$ is the query and $\mathcal {A}$ is the answer. Note that the answer is a single word in the document, which requires us to exploit the relationship between the document and the query. <<</Task Description>>> <<<Modified AoA Reader>>> In this section, we briefly introduce the original AoA Reader BIBREF10 and illustrate our modifications. When a cloze-style training triple $\langle \mathcal {D}, \mathcal {Q}, \mathcal {A} \rangle $ is given, the Modified AoA Reader is constructed in the following steps. First, the document and query are transformed into continuous representations with the embedding layer and the recurrent layer. The recurrent layer can be the simple RNN, GRU, LSTM, or our CRU model. To further strengthen the representation power, we introduce a simple modification in the embedding layer, for which we found strong empirical gains in performance. The main idea is to utilize additional sparse features of the word and add (concatenate) these features to the word embeddings to enrich the word representations. Such additional features have been shown to be effective in various models BIBREF7, BIBREF17, BIBREF11. In this paper, we adopt two additional features in the document word embeddings (no features are applied to the query side). $\bullet $ Document word frequency: Calculate the frequency of each document word. This helps the model to pay more attention to the important (more mentioned) parts of the document. $\bullet $ Count of query word: Count the number of times each document word appears in the query. For example, if a document word appears three times in the query, then the feature value will be 3. We empirically find that, instead of using binary features (appear=1, otherwise=0) BIBREF17, indicating the count of the word provides more information, suggesting that the more a word occurs in the query, the less likely it is to be the answer. Accordingly, we replace the original document-side embedding formulation (Equation 16) with the concatenation $e(x) = [W_e \cdot x ; freq(x) ; CoQ(x)]$ (the query side is not changed), where $freq(x)$ and $CoQ(x)$ are the features introduced above. Other parts of the model remain the same as the original AoA Reader. For simplicity, we omit this part; the detailed illustrations can be found in BIBREF10.
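As a concrete illustration of the two document-side features just described, the short Python sketch below computes them for a toy document-query pair. It is illustrative only: the function name and the toy strings are invented, and whether the raw counts are further normalised before being concatenated to the embeddings is not specified here.

from collections import Counter

def document_features(doc_tokens, query_tokens):
    """Per-token sparse features for the document side (illustrative only).

    Returns, for each document token, its frequency within the document
    and the number of times it occurs in the query."""
    doc_counts = Counter(doc_tokens)
    query_counts = Counter(query_tokens)
    freq = [doc_counts[w] for w in doc_tokens]    # document word frequency
    coq = [query_counts[w] for w in doc_tokens]   # count of the word in the query
    return freq, coq

# The two feature values would then be appended to each word embedding as
# extra dimensions, as described above.
doc = "the cat sat on the mat because the mat was warm".split()
query = "where did the cat sit X".split()
print(document_features(doc, query))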
<<</Modified AoA Reader>>> <<</Reading Comprehension>>> <<</Applications>>> <<<Experiments: Sentiment Classification>>> <<<Experimental Setups>>> In the sentiment classification task, we tried our model on the following public datasets. [leftmargin=*] MR Movie reviews with one sentence each. Each review is classified into positive or negative BIBREF18. IMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19. Note that each movie review may contain several sentences. SUBJ$^1$ Movie review labeled with subjective or objective BIBREF20. The statistics and hyper-parameter settings of these datasets are listed in Table TABREF33. As these datasets are quite small and overfit easily, we employed $l_2$-regularization of 0.0001 to the embedding layer in all datasets. Also, we applied dropout BIBREF21 to the output of the embedding layer and fully connected layer. The fully connected layer has a dimension of 1024. In the MR and SUBJ, the embedding layer is initialized with 200-dimensional GloVe embeddings (trained on 840B token) BIBREF22 and fine-tuned during the training process. In the IMDB condition, the vocabulary is truncated by descending word frequency order. We adopt batched training strategy of 32 samples with ADAM optimizer BIBREF23, and clipped gradient to 5 BIBREF24. Unless indicated, the convolutional filter length is set to 3, and ReLU for the non-linear function of CNN in all experiments. We use 10-fold cross-validation (CV) in the dataset that has no train/valid/test division. <<</Experimental Setups>>> <<<Results>>> The experimental results are shown in Table TABREF35. As we mentioned before, all RNNs in these models are bi-directional, because we wonder if our bi-CRU could still give substantial improvements over bi-GRU which could capture both history and future information. As we can see that, all variants of our CRU model could give substantial improvements over the traditional GRU model, where a maximum gain of 2.7%, 1.0%, and 1.9% can be observed in three datasets, respectively. We also found that though we adopt a straightforward classification model, our CRU model could outperform the state-of-the-art systems by 0.6%, 0.7%, and 0.8% gains respectively, which demonstrate its effectiveness. By employing more sophisticated architecture or introducing task-specific features, we think there is still much room for further improvements, which is beyond the scope of this paper. When comparing three variants of the CRU model, as we expected, the CRU with deep-enhanced fusion performs best among them. This demonstrates that by incorporating contextual representations with original word embedding could enhance the representation power. Also, we noticed that when we tried a larger window size of the convolutional filter, i.e., 5 in this experiment, does not give a rise in the performance. We plot the trends of MR test set accuracy with the increasing convolutional filter length, as shown in Figure FIGREF36. As we can see that, using a smaller convolutional filter does not provide much contextual information, thus giving a lower accuracy. On the contrary, the larger filters generally outperform the lower ones, but not always. One possible reason for this is that when the filter becomes larger, the amortized contextual information is less than a smaller filter, and make it harder for the model to learn the contextual information. However, we think the proper size of the convolutional filter may vary task by task. 
Some tasks that require long-span contextual information may benefit from a larger filter. We also compared our CRU model with related works that combine CNN and RNN BIBREF3, BIBREF4, BIBREF5. From the results, we can see that our CRU model significantly outperforms previous works, which demonstrates that by employing deep fusion and enhancing the contextual representations with original embeddings could substantially improve the power of word representations. On another aspect, we plot the trends of IMDB test set accuracy during the training process, as depicted in Figure FIGREF37. As we can see that, after iterating six epochs of training data, all variants of CRU models show faster convergence speed and smaller performance fluctuation than the traditional GRU model, which demonstrates that the proposed CRU model has better training stability. <<</Results>>> <<</Experiments: Sentiment Classification>>> <<<Experiments: Reading Comprehension>>> <<</Experiments: Reading Comprehension>>> <<<Qualitative Analysis>>> In this section, we will give a qualitative analysis on our proposed CRU model in the sentiment classification task. We focus on two categories of the movie reviews, which is quite harder for the model to judge the correct sentiment. The first one is the movie review that contains negation terms, such as “not”. The second type is the one contains sentiment transition, such as “clever but not compelling”. We manually select 50 samples of each category in the MR dataset, forming a total of 100 samples to see if our CRU model is superior in handling these movie reviews. The results are shown in Table TABREF45. As we can see that, our CRU model is better at both categories of movie review classification, demonstrating its effectiveness. Among these samples, we select an intuitive example that the CRU successfully captures the true meaning of the sentence and gives the correct sentiment label. We segment a full movie review into three sentences, which is shown in Table TABREF46. Regarding the first and second sentence, both models give correct sentiment prediction. While introducing the third sentence, the GRU baseline model failed to recognize this review as a positive sentiment because there are many negation terms in the sentence. However, our CRU model could capture the local context during the recurrent modeling the sentence, and the phrases such as “not making fun” and “not laughing at” could be correctly noted as positive sentiment which will correct the sentiment category of the full review, suggesting that our model is superior at modeling local context and gives much accurate meaning. <<</Qualitative Analysis>>> <<<Conclusion>>> In this paper, we proposed an effective recurrent model for modeling sequences, called Contextual Recurrent Units (CRU). We inject the CNN into GRU, which aims to better model the local context information via CNN before recurrently modeling the sequence. We have tested our CRU model on the cloze-style reading comprehension task and sentiment classification task. Experimental results show that our model could give substantial improvements over various state-of-the-art systems and set up new records on the respective public datasets. In the future, we plan to investigate convolutional filters that have dynamic lengths to adaptively capture the possible spans of its context. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Works\nOur approach\nGated Recurrent Unit\nContextual Recurrent Unit\nShallow Fusion\nDeep Fusion\nDeep-Enhanced Fusion\nApplications\nSentiment Classification\nReading Comprehension\nTask Description\nModified AoA Reader\nExperiments: Sentiment Classification\nExperimental Setups\nResults\nExperiments: Reading Comprehension\nQualitative Analysis\nConclusion" ], "type": "outline" }
2001.11899
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> An efficient automated data analytics approach to large scale computational comparative linguistics <<<Abstract>>> This research project aimed to overcome the challenge of analysing human language relationships, facilitate the grouping of languages and formation of genealogical relationship between them by developing automated comparison techniques. Techniques were based on the phonetic representation of certain key words and concept. Example word sets included numbers 1-10 (curated), large database of numbers 1-10 and sheep counting numbers 1-10 (other sources), colours (curated), basic words (curated). ::: To enable comparison within the sets the measure of Edit distance was calculated based on Levenshtein distance metric. This metric between two strings is the minimum number of single-character edits, operations including: insertions, deletions or substitutions. To explore which words exhibit more or less variation, which words are more preserved and examine how languages could be grouped based on linguistic distances within sets, several data analytics techniques were involved. Those included density evaluation, hierarchical clustering, silhouette, mean, standard deviation and Bhattacharya coefficient calculations. These techniques lead to the development of a workflow which was later implemented by combining Unix shell scripts, a developed R package and SWI Prolog. This proved to be computationally efficient and permitted the fast exploration of large language sets and their analysis. <<</Abstract>>> <<<Introduction>>> The need to uncover presumed underlying linguistic evolutionary principles and analyse correlation between world's languages has entailed this research. For centuries people have been speculating about the origins of language, however this subject is still obscure. Non-automated linguistic analysis of language relationships has been complicated and very time-consuming. Consequently, this research aims to apply a computational approach to compare human languages. It is based on the phonetic representation of certain key words and concept. This comparison of word similarity aims to facilitate the grouping of languages and the analysis of the formation of genealogical relationship between languages. This report contains a thorough description of the proposed methods, developed techniques and discussion of the results. During this projects several collections of words were gathered and examined, including colour words and numbers. The methods included edit distance, phonetic substitution table, hierarchical clustering with a cut and other analysis methods. They all aimed to provide an insight regarding both technical data summary and its visual representation. <<</Introduction>>> <<<Background>>> <<<Human languages>>> For centuries, people have speculated over the origins of language and its early development. It is believed that language first appeared among Homo Sapiens somewhere between 50,000 and 150,000 years ago BIBREF0. However, the origins of human language are very obscure. To begin with, it is still unknown if the human language originated from one original and universal Proto-Language. 
Alfredo Trombetti made the first scientific attempt to establish the reality of monogenesis in languages. His investigation concluded that it was spoken between 100,000 and 200,000 years ago, or close to the first emergence of Homo Sapiens BIBREF1. However it was never accepted comprehensively. The concept of Proto-Language is purely hypothetical and not amenable to analysis in historical linguistics. Furthermore, there are multiple theories of how language evolved. These could be separated into two distinctly different groups. Firstly, some researchers claim that language evolved as a result of other evolutionary processes, essentially making it a by-product of evolution, selection for other abilities or as a consequence of yet unknown laws of growth and form. This theory is clearly established in Noam Chomsky BIBREF2 and Stephen Jay Gould's work BIBREF3. Both scientists hypothesize that language evolved together with the human brain, or with the evolution of cognitive structures. They were used for tool making, information processing, learning and were also beneficial for complex communication. This conforms with the theory that as our brains became larger, our cognitive functions increased. Secondly, another widely held theory is that language came about as an evolutionary adaptation, which is when a population undergoes a change in process over time to survive better. Scientists Steven Pinker and Paul Bloom in “Natural Language and Natural Selection” BIBREF4 theorize that a series of calls or gestures evolved over time into combinations, resulting in complex communication. Today there are 7,111 distinct languages spoken worldwide according to the 2019 Ethnologue language database. Many circumstances such as the spread of old civilizations, geographical features, and history determine the number of languages spoken in a particular region. Nearly two thirds of languages are from Asia and Africa. The Asian continent has the largest number of spoken languages - 2,303. Africa follows closely with 2,140 languages spoken across continent. However, given the population of certain areas and colonial expansion in recent centuries, 86 percent of people use languages from Europe and Asia. It is estimated that there is around 4.2 billion speakers of Asian languages and around 1.75 billion speakers of European languages. Moreover, Pacific languages have approximately 1,000 speakers each on average, but altogether, they represent more than a third of our world’s languages. Papua New Guinea is the most linguistically diverse country in the world. This is possibly due to the effect of its geography imposing isolation on communities. It has over 840 languages spoken, with twelve of them lacking many speakers. It is followed by Indonesia, which has 709 languages spoken across the country. <<<Indo-European languages and Kurgan Hypothesis>>> Indo-European languages is a language family that represents most of the modern languages of Europe, as well as specific languages of Asia. Indo-European language family consist of several hundreds of related languages and dialects. Consequently, it was an interest of the linguists to explore the origins of the Indo-European language family. In the mid-1950s, Marija Gimbutas, a Lithuanian-American archaeologist and anthropologist, combined her substantial background in linguistic paleontology with archaeological evidence to formulate the Kurgan hypothesis BIBREF5. 
This hypothesis is the most widely accepted proposal to identify the homeland of Proto-Indo-European (PIE) (ancient common ancestor of the Indo-European languages) speakers and to explain the rapid and extensive spread of Indo-European languages throughout Europe and Asia BIBREF6 BIBREF7. The Kurgan hypothesis proposes that the most likely speakers of the Proto-Indo-European language were people of a Kurgan culture in the Pontic steppe, by the north side of the Black Sea. It also divides the Kurgan culture into four successive stages (I, II, III, IV) and identifies three waves of expansions (I, II, III). In addition, the model suggest that the Indo-European migration was happening from 4000 to 1000 BC. See figure FIGREF4 for visual illustration of Indo-European migration. Today there are approximately 445 living Indo-European languages, which are spoken by 3.2 billion people, according to Ethnologue. They are divided into the following groups: Albanian, Armenian, Baltic, Slavic, Celtic, Germanic, Hellenic, Indo-Iranian and Italic (Romance) FIGREF3 BIBREF8. <<</Indo-European languages and Kurgan Hypothesis>>> <<<Brittonic languages>>> Brittonic or British Celtic languages derive from the Common Brittonic language, spoken throughout Great Britain south of the Firth of Forth during the Iron Age and Roman period. They are classified as Indo-European Celtic languages BIBREF10. The family tree of Brittonic languages is showed in Table TABREF6. Common Brittonic is ancestral to Western and Southwestern Brittonic. Consequently, Cumbric and Welsh, which is spoken in Wales, derived from Western Brittonic. Cornish and Breton, spoken in Cornwall and Brittany, respectively, originated from Southwestern side. Today Welsh, Cornish and Breton are still in use. However, it is worth to point out that Cornish is a language revived by second-language learners due to the last native speakers dying in the late 18th century. Some people claimed that the Cornish language is an important part of their identity, culture and heritage, and a revival began in the early 20th century. Cornish is currently a recognised minority language under the European Charter for Regional or Minority Languages. <<</Brittonic languages>>> <<<Sheep Counting System>>> Brittonic Celtic language is an ancestor to the number names used for sheep counting BIBREF11 BIBREF12. Until the Industrial Revolution, the use of traditional number systems was common among shepherds, especially in the fells of the Lake District. The sheep-counting system was referred to as Yan Tan Tethera. It was spread across Northern England and in other parts of Britain in earlier times. The number names varied according to dialect, geography, and other factors. They also preserved interesting indications of how languages evolved over time. The word “yan” or “yen” meaning “one”, in some northern English dialects represents a regular development in Northern English BIBREF13. During the development the Old English long vowel // <ā> was broken into /ie/, /ia/ and so on. This explains the shift to “yan” and “ane” from the Old English ān, which is itself derived from the Proto-Germanic “*ainaz” BIBREF14. In addition, the counting system demonstrates a clear connection with counting on the fingers. Particularly after numbers reach 10, as the best known examples are formed according to this structure: 1 and 10, 2 and 10, up to 15, and then 1 and 15, 2 and 15, up to 20. The count variability would end at 20. 
It might be due to the fact, that the shepherds, on reaching 20, would transfer a pebble or marble from one pocket to another, so as to keep a tally of the number of scores. <<</Sheep Counting System>>> <<</Human languages>>> <<</Background>>> <<<Aims and Objectives>>> <<<Overall Aim>>> The aim of this research was to develop computational methods to compare human languages based on the phonetic form of single words (i.e. not exploiting grammar). This comparison of word similarity aims to facilitate the grouping of languages, the identification of the the presumed underlying linguistic evolutionary principles and the analysis of the formation of genealogical relationship between languages. <<</Overall Aim>>> <<<Specific Objectives>>> Devise a way to encode the phonetic representation of words, using: an in-house encoding, an IPA (International Phonetic Alphabet). Develop methods to analyze the comparative relationships between languages using: descriptive and inferential statistics, clustering, visualisation of the data, and analysis of the results. Implement a repeatable process for running the analysis methods with new data. Analyse the correlation between geographical distance and language similarity (linguistic distance), and investigate if it explains the evolutionary distance. Examine which words exhibit more or less variation and the likely causes of it. Explore which words are preserved better across the same language group and possible reasons behind it. Explore which language group preserves particular words more in comparison to others and potential reasons behind it. Determine if certain language groups are correct and exploit the possibility of forming new ones. <<</Specific Objectives>>> <<</Aims and Objectives>>> <<<Data>>> <<<Language files>>> Language file or database is a set of languages, each of which is associated with an ordered list of words. All lists of words for a particular data set have the same length. For example: numbers(romani,[iek,dui,trin,shtar,panj,shov,efta,oksto,ena,desh]). numbers(english,[wun,too,three,foor,five,siks,seven,eit,nine,ten]). numbers(french,[un,de,troi,katre,sink,sis,set,wuit,neuf,dis]). Words and languages are encoded in this format for later use of Prolog. In Prolog each “numbers” line is a fact, which has 2 arguments; the first is the language name and the second is a list (indicated in between square brackets) of words. Words can be written down in their original form or encoded phonetically (as shown in the example). Where synonyms for a word are known, then the word itself is represented by a list of the synonym words. In the example below, Lithuanian, Russian and Italian have two words for the English `blue': words(english,[black,white,red,yellow,blue,green]). words(lithuanian,[juoda,balta,raudona,geltona,[melyna,zhydra],zhalia]). words(russian,[chornyj,belyj,krasnyj,zholtyj,[sinij,goluboj],zeljonyj]). words(italian,[nero,bianco,rosso,giallo,[blu,azzurro],verde]). The main focus of this research was exploring words phonetically. Consequently, special encoding was used. It consisted of encoding phonemes by using only one letter; incorporating capital letters for encoding different sounds (See table TABREF21). Table TABREF22 summarises the language files that are obtained at the moment. <<</Language files>>> <<<Sheep>>> <<<Sheep counting words>>> Sheep counting numbers were extracted from “Yan Tan Tethera” BIBREF12 page on Wikipedia and placed in a Prolog database. 
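Since the language files described in the Data section above are plain Prolog facts, they can also be consumed outside Prolog. The following minimal Python sketch is an illustration rather than part of the published pipeline; the function name and file handling are assumptions, and facts with nested synonym lists are simply skipped.

import re

FACT = re.compile(r"^(\w+)\((\w+),\[(.*)\]\)\.\s*$")

def load_language_file(path):
    """Read facts like numbers(english,[wun,too,...]). into {language: [words]}.

    Facts containing nested synonym lists, e.g. [melyna,zhydra], are skipped
    by this minimal sketch rather than parsed."""
    table = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = FACT.match(line.strip())
            if not m:
                continue
            _, language, body = m.groups()
            if "[" in body:   # nested synonym list, would need a real parser
                continue
            table[language] = [w.strip() for w in body.split(",")]
    return table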
Furthermore, the data was encoded phonetically using the set of rules provided by Prof. David Gilbert. In the given source, number sets ranged from 1-3 to 1-20 for different dialects. The initial step was to reduce the size of the data to sets of numbers 1-10. This aimed: to keep the Prolog syntax free of errors (avoiding “-” and “ ”, which were common symbols once numbers exceeded 10); and to avoid the effects of the different methods of forming and writing down numbers higher than 10. (Usually they were formed from the numbers 1-10 and a base; however, they were written in different orders, making the comparison inefficient.) In addition, the Wharfedale dialect was removed since only numbers 1-3 were provided, and the Weardale dialect was eliminated as it had a counting system with base 5. Consequently, the final version of the sheep counting numbers database consisted of 23 observations (dialects) with numbers 1-10. <<</Sheep counting words>>> <<<Geographical data>>> In order to enable the analysis of the relationship between linguistic and geographical distance, a geographical distance database was created. This was done by first creating a personalized Google Map with 23 pins marking the locations of the different dialects (placed approximately in the middle of each area) (Figure: FIGREF28). Subsequently, pairwise distances were calculated between all of them (taking walking distance) and added to the database for further use. <<</Geographical data>>> <<<Analysis of average and subset linguistic distance>>> After applying the functions “mean_SD” (Figure: FIGREF72) and “densityP” (Figure: FIGREF73) to the linguistic distances of every word (numbers 1 to 10) in R, the following observations were made. First of all, the most preserved number across all dialects was “10”, with a distance mean of 0.109 and a standard deviation of 0.129. Numbers “1”, “2”, “3”, “4” had comparatively small distances, which might be the result of being used more frequently. On the other hand, number “6” showed more dissimilarities between dialects than the other numbers: the mean score was 0.567 and the standard deviation 0.234. The product of the mean and standard deviation scores helped to evaluate both at the same time. Moreover, the density plots showed significant fluctuation and tended to have a few peaks, but in general they conformed with the statistics provided by “mean_SD”. <<</Analysis of average and subset linguistic distance>>> <<</Sheep>>> <<<Colours>>> Colour words were extracted from the “Colour words in many languages” BIBREF15 page on Omniglot, collected from people and dictionaries. In addition, the data was encoded phonetically using the set of rules provided by Prof. David Gilbert. The latest version of the database consisted of 42 different languages, each containing 6 colours: black, white, red, yellow, blue, green. For the purposes of analysis the following groups were created: All languages - “ColoursAll” (42 languages) Indo-European languages - “ColoursIE” (39 languages) Germanic languages - “ColoursPGermanic” (10 languages) Romance languages - “ColoursPRomance” (11 languages) Germanic and Romance languages - “ColoursPG_R” (21 languages) <<<Mean and Standard Deviation>>> When examining the data calculated for “ColoursAll”, none of the colours showed a clear tendency to be more preserved than the others (Figure: FIGREF83). All colours had large distances and comparatively small standard deviations when compared with the other groups; the small standard deviations were most likely the result of most of the distances being large.
Indo-European language group scores were similar to “ColoursAll”, exhibiting slightly larger standard deviation (Figure: FIGREF84). Conclusion could be drawn that words for color “Red” are more similar in this group. The mean score of linguistic distances was 0.61, and SD was equal to 0.178, when average mean was 0.642 and SD 0.212. However, no colour stood out distinctly. Germanic and Romance language groups revealed more significant results. Germanic languages preserved the colour “Green” considerably (Figure: FIGREF85). The mean and SD was 0.168 and 0.129, when on average mean was reaching 0.333 and SD 0.171. In addition, the colour “Blue” had favorable scores as well - mean was 0.209 and SD was 0.106. Furthermore, Romance languages demonstrated slightly higher means and standard deviations, on average reaching 0.45 and 0.256 (Figure: FIGREF86). Similarly to Germanic, the most preserved colour word in Romance languages was “Green” with a mean of 0.296 and SD of 0.214. It was followed by words for “Black” and then for “Blue”, both being quite similar. <<</Mean and Standard Deviation>>> <<<Density Plots>>> Density plots of all languages and Indo-European languages were similar: both having multiple peaks with the most density around scores of 0.75 (big linguistic distance). Moreover, Germanic languages density distribution consisted of two peaks for words “White”, “Blue” and “Green” (Figure: FIGREF88). This could possibly be the result of certain weighting in the Phonetic Substitution Table or indicate possible further grouping of languages. The color “Black” had more normal distribution and smoother bell shape compared to others. Furthermore, Romance languages also obtained density plots with two peaks for words “White”, “Yellow”, “Blue” (Figure: FIGREF89). In contrast, “Black”, “Red” and “Green” distributions were quite smooth. In order to experiment how the Phonetic Substitution Table affects the linguistic distances, “densityP” function was applied to the linguistic distances calculated with “GabyTable” substitution table. The aim was to eliminate the two peaks in the Germanic language group for word “Green”. In Germanic languages word for green tended to begin with either “gr” or “khr” (encoded as “Kr”) - both sounding similar phonetically. However, in the original substitution table, a weight for changing “K”(kh) to “g” (and the other way around) did not exist. Consequently, a new table was implemented with this substitution. This change resulted in notably smaller linguistic distances - the mean for the word “Green” was 0.099. However, it did not solve the occurrence of two peaks. The density of “Green” again had two main peaks, but differently distributed compared to the previous case. <<</Density Plots>>> <<<Bhattacharya Coefficients>>> Bhattacharya coefficients were calculated within each group for different pairs of colours. This helped to evaluate which colours were closer in distribution. In addition, hierarchical clustering was done with Bhattacharya coefficients (find the dendrograms in the Appendix SECREF123). However, the potential meaning behind the results was not fully examined. Another potential use of Bhattacharya coefficients is their application to the same word from a different language group. As a result, the preservation of particular words can be analysed across language groups, enabling to compare and evaluate potential reasons behind it. 
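For readers who want to reproduce this kind of comparison, a Bhattacharya (Bhattacharyya) coefficient between two sets of normalised distances can be approximated by binning both samples on a common grid and summing the square roots of the matched bin probabilities. The Python sketch below is only illustrative: the bin count, the [0, 1] range and the toy input vectors are assumptions, and the "bhatt" R function described later in the paper may bin or smooth the data differently.

import numpy as np

def bhattacharyya_coefficient(sample_a, sample_b, bins=20, value_range=(0.0, 1.0)):
    """Overlap between two empirical distributions of normalised distances.

    Both samples are binned on a common grid; the coefficient is
    sum_i sqrt(p_i * q_i), equal to 1 for identical distributions."""
    p, _ = np.histogram(sample_a, bins=bins, range=value_range)
    q, _ = np.histogram(sample_b, bins=bins, range=value_range)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Toy, made-up distance samples, e.g. "green" in two language groups.
germanic_green = np.array([0.10, 0.15, 0.20, 0.05, 0.25])
romance_green = np.array([0.30, 0.25, 0.40, 0.20, 0.35])
print(bhattacharyya_coefficient(germanic_green, romance_green))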
<<</Bhattacharya Coefficients>>> <<</Colours>>> <<<IPA>>> The “Automatic Phonemic Transcriber” BIBREF16 was used to create 3 IPA-encoded databases: “BasicWords” - words in their original form were taken from Prof. David Gilbert's database of basic words (including: sun, moon, rain, water, fire, man, woman, mother, father, child, yes, no, blood). “Numbers” - numbers from 1-10 in their original form were taken from Prof. David Gilbert's small database of numbers. “Colours” - words were taken from the above mentioned database (including the words: black, white, red, yellow, blue, green). Each of the above mentioned databases consisted of 3 languages: English, Danish and German (these were the languages the Automatic Phonemic Transcriber provided), all encoded in IPA. As the research progressed, the difficulty of obtaining IPA encodings for different languages became apparent: this study could not find a cross-linguistic IPA dictionary that included more than 3 languages, which raises the question of whether one exists. <<</IPA>>> <<</Data>>> <<<Methodology>>> There are two main processes to be carried out. The first process (Figure: FIGREF43) aims to analyse a database of words; explore which words exhibit more or less variation and which words are more preserved; and examine how languages could be grouped based on the linguistic distances of words. It begins with the calculation of pairwise linguistic distances for the given database of words. A Phonetic Substitution Table is used to assign weights during the calculation and can be modified. The result is a new distance table which is analysed in the following ways: performing the “densityP” function, the outcome of which is density plots for every word of a database; performing hierarchical clustering, after which the “Best cut” is determined, which is either the cut with the best Silhouette value after calculating all possible cases, or a forced number K equal to the number of words per language in the language file; calculating Bhattacharya coefficients; and performing the “mean_SD” function. The second process (Figure: FIGREF44) aims to investigate the relationship between two sets of distance data. In this research, it was applied to analyse the relationship between linguistic and geographical distances. It starts with producing two pairwise distance tables: one of calculated geographical distances, the other of calculated linguistic distances. Then the data from both tables is combined into a data frame for regression analysis in R. The outcome is an object of the class “lm” (the result of applying the R function “lm”), which is used for data analysis, and a scatter plot with a regression line for visual analysis. Both processes have been automated, see Section SECREF66. <<</Methodology>>> <<<Methods>>> <<<Edit Distance>>> For the purposes of this research, Edit distance (a measure in computer science and computational linguistics for determining the similarity between 2 strings) was calculated based on the Levenshtein distance metric. This metric between two strings is the minimum number of single-character edits (insertions, deletions or substitutions) needed to turn one string into the other. The Levenshtein distance between two strings a,b (of length $\mid a\mid $ and $\mid b\mid $ respectively) is given by $lev_{a,b}(\mid a \mid , \mid b \mid )$, where $lev_{a,b}(i,j) = \max (i,j)$ if $\min (i,j)=0$, and otherwise $lev_{a,b}(i,j) = \min \lbrace lev_{a,b}(i-1,j)+1,\ lev_{a,b}(i,j-1)+1,\ lev_{a,b}(i-1,j-1)+1_{(a_{i}\ne b_{j})}\rbrace $, where $1_{(a_{i}\ne b_{j})}$ is the indicator function equal to 0 when $a_{i}=b_{j}$ and equal to 1 otherwise. A normalised edit distance between two strings can then be computed by scaling this raw distance by the lengths of the two words. Edit distance was implemented by Prof.
David Gilbert using dynamic programming in SWI Prolog BIBREF17. The program was used to compare two words with the same meaning from different languages. When pairwise comparing two words where either one or both comprise synonyms, all the alternatives for each word in one language are compared with the corresponding (set of) words in the other language, and the closest match is selected. In addition, all to all comparisons were made, i.e. the edit distance was also calculated for words having different meanings. Finally, the edit distance for two languages represented by two lists of equal length of corresponding words was computed by taking the average of the edit distance for each (corresponding) pair of words. An example of pairwise alignments is for the pair of words overa-hofa, where 3 alignments are produced with the use of gap penalty $=1$ and substitution penalties $f \leftrightarrow v = 0.2$, $e \leftrightarrow o = 0.2$ and all other mismatches 1: [[-,h],[o,o],[v,f],[e,-],[r,-],[a,a]] [[o,-],[v,h],[e,o],[r,f],[a,a]] [[o,h],[v,-],[e,o],[r,f],[a,a]] each with a raw edit distance of 3.2, together with the corresponding normalised edit distance. For the sake of clarity, we can write the first alignment, for example, as the two aligned rows [-,o,v,e,r,a] over [h,o,f,-,-,a], where only 3 letters are directly aligned. <<</Edit Distance>>> <<<Phonetic Substitution Table>>> In order to give a specified weight to the different operations (insertion, deletion and substitution), a Phonetic Substitution Table was created by incorporating Grimm's law BIBREF18 and extending it in-house. Grimm's Law, a principle of relationships in Indo-European languages, describes a process of regular shifting of consonants in groups. It consists of 3 phases in terms of a chain shift BIBREF19. Proto-Indo-European voiceless stops change into voiceless fricatives. Proto-Indo-European voiced stops become voiceless stops. Proto-Indo-European voiced aspirated stops become voiced stops or fricatives. This is an abstract representation of the chain shift: $bh > b > p > f$, $dh > d > t > \theta $, $gh > g > k > x$, $gwh > gw > kw > xw$. Figure FIGREF54 illustrates how further consonant shifting following Grimm's law affected words from different languages BIBREF20. The phonetic substitution table was extended in-house by adding more shifts. In addition, it was written to work with the special encoding described in section SECREF20. Find the full table “editable” in Appendix SECREF11. Another phonetic substitution table, called “editableGaby”, was made (see Appendix SECREF11). It was extended by adding pairs like “dzh” and “zh”; “dzh” and “ch”; “kh” and “g”; as well as “H” (the sound of e.g. the Spanish/Portuguese “j”) with “kh”, “g”, “k”, “h”. In addition, some of the weights were changed for certain pairs for experimental purposes. <<</Phonetic Substitution Table>>> <<<Hierarchical Clustering>>> <<<Using the OC program>>> The OC program BIBREF21 is a general purpose hierarchical cluster analysis program. It outputs a list of the clusters and optionally draws a dendrogram in PostScript. It requires a complete upper diagonal distance or similarity matrix as input. <<</Using the OC program>>> <<<Using R>>> Hierarchical clustering in R was performed by combining clustering with Silhouette value calculation and the subsequent cut. In order to perform agglomerative hierarchical clustering more efficiently, we created a set of functions in R: “sMatrix” - Makes a symmetric matrix from a specified column. The function takes a specifically formatted data frame as input and returns a new data frame.
Having a symmetric matrix is necessary for “silhouetteV” and “hcutVisual” functions. “silhouetteV” - Calculates Silhouette values with “k” value varying from 2 to n-1 (n being the number of different languages/number of rows/number of columns in a data frame). The function takes a symmetric distance matrix as an input and returns a new data frame containing all Silhouette values. “hcutVisual” - Performs hierarchical clustering and makes a cut with the given K value. Makes Silhouette plot, Cluster plot and dendrogram. Returns a “hcut” object from which cluster assignment, silhouette information, etc. can be extracted. It is important to note that K-Means clustering was not performed as the algorithm is meant to operate over a data matrix, not a distance matrix. <<</Using R>>> <<</Hierarchical Clustering>>> <<<Further analysis with R>>> Another set of functions was created to analyse collected data further. They target to ease the comparison of the mean, standard deviation, Bhattacharya coefficient within the words or language groups. Including: “mean_SD” - Calculates mean, standard deviation, product of the mean and the SD multiplication for every column of the input. Visualises all three values for each column and places it in one plot, which is returned. “densityP” - Makes a density plot for every column of the input and puts it in one plot, which is returned. “tscore” - Calculates t-score for every value in the given data frame. (T-score is a standard score Z shifted and scaled to have a mean of 50 and a standard deviation of 10) “bhatt” - Calculates Bhattacharya coefficient (the probability of the two distributions being the same) for every pair of columns in the data frame. The function returns a new data frame. <<</Further analysis with R>>> <<<Process automation>>> In order to optimise and perform analysis in the most time-efficient manner processes of comparing languages were automated. It was done by creating two shell scripts and an R script for each of them. The first shell script named “oc2r_hist.sh” was made to perform hierarchical clustering with the best silhouette value cut. This script takes a language database as an input and performs pairwise distance calculation. It then calls “hClustering.R” R script, which reads in the produced OC file, performs hierarchical clustering and calculates all possible silhouette values. Finally, it makes a cut with the number of clusters, which provides the highest silhouette value. To enable this process the R script was written by incorporating the functions described in section SECREF57. The outcome of this program is a table of clusters, a dendrogram, clusters' and silhouette plots. The second shell script called “wordset_make_analyse.sh” was made to perform calculations of mean, standard deviation, Bhattacharya scores and produce density plots. This script takes a language database as an input and performs pairwise distance calculations for each word of the database. It then calls “rAnalysis.R” R script, which reads in the produced OC file and performs further calculations. Firstly, it calculates mean, standard deviation and the product of both of each word and outputs a histogram and a table of scores. Secondly, it produces density plots of each word. Finally, it converts scores into T-Scores and calculates Bhattacharya coefficient for every possible pair of words. It then outputs a table of scores. To enable this process the R script was written by incorporating the functions described in section SECREF61. 
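The pairwise distance calculation that these scripts run before the R analysis can be sketched compactly. The Python snippet below is only an illustration of a weighted Levenshtein distance of the kind described in the Edit Distance subsection, not the SWI Prolog implementation used in the project: only the substitution weights quoted for the overa-hofa example are included, all other mismatches cost 1, and the final normalisation by the longer word is an assumption rather than the paper's exact formula.

def weighted_edit_distance(a, b, sub_weights, gap=1.0):
    """Dynamic-programming Levenshtein distance with per-pair substitution weights."""
    def sub(x, y):
        if x == y:
            return 0.0
        return sub_weights.get((x, y), sub_weights.get((y, x), 1.0))

    rows, cols = len(a) + 1, len(b) + 1
    d = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows):
        d[i][0] = i * gap
    for j in range(1, cols):
        d[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            d[i][j] = min(d[i - 1][j] + gap,          # deletion
                          d[i][j - 1] + gap,          # insertion
                          d[i - 1][j - 1] + sub(a[i - 1], b[j - 1]))
    return d[-1][-1]

# Weights quoted in the Edit Distance subsection; everything else costs 1.
weights = {("f", "v"): 0.2, ("e", "o"): 0.2}
raw = weighted_edit_distance("overa", "hofa", weights)
print(raw)                                    # 3.2, matching the overa-hofa example
print(raw / max(len("overa"), len("hofa")))   # one possible normalisation (assumption)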
Finally, both of the scripts were combined to minimise user participation. <<</Process automation>>> <<</Methods>>> <<<Results>>> <<<Hierarchical clustering>>> Hierarchical clustering was performed with the best silhouette value cut (Figure FIGREF76). The silhouette value suggested making 9 clusters. In this grouping, the most interesting observation was that the Welsh, Breton and Cornish languages were placed together. This conforms with the fact that all 3 languages descended directly from the Common Brittonic language spoken throughout Britain before the English language became dominant. <<<All to all comparison analysis>>> To enable analysis of clusters from the all to all comparison, hierarchical clustering was performed. This was done using two different approaches: (1) calculating silhouette values and choosing the number of clusters accordingly, and (2) forcing the function to make 10 clusters, since the sheep counting database contains the numbers 1 to 10. Using the function “silhouetteV”, silhouette values were calculated for all possible $k$ values. The returned data frame indicated that the best number of clusters was 70 (see Appendix SECREF120 for the dendrogram and cluster plot). The suggested clusters did not separate the numbers 1-10 perfectly, but they were comparatively good. A pattern was noticed whereby numbers with lower mean and standard deviation scores resulted in purer clusters. Clusters of the numbers “1”, “2”, “3”, “4”, “5” and “10” were not as mixed as those of “6”, “7”, “8”, “9”. Another way of looking at the all to all comparison data was by producing 10 clusters. This was done using the “hcutVisual” and “cPurity” functions (see Appendix SECREF120 for the cluster plot). The results showed high impurities of the clusters (Figure FIGREF78). Two out of ten clusters were pure, both containing the number “5”. Another relatively pure cluster was composed of the number “10” and two entries of the number “2”. The rest consisted of up to 7 different numbers. This shows that sheep counting numbers in different dialects are too different to form 10 clusters, each corresponding to a single number. However, if the dialects were first grouped and clustering was then performed on the smaller groups, reasonably pure clusters would be expected. Exploring these grouping options could be a subject for further work. <<</All to all comparison analysis>>> <<<Linguistic and Geographical distance relationship>>> In order to investigate the correlation between linguistic and geographical distance, the “lm” function was applied and a scatter plot was created. The regression line in the scatter plot suggested that a relationship existed. However, the R-squared value extracted from the “lm” object was equal to 0.131, indicating that the relationship existed but was weak. One assumption made was that the Cornish, Breton and Welsh dialects might have had a weakening effect on the relationship, since they had large linguistic distances compared to other dialects. However, this assumption could not be validated, as the correlation was less significant after eliminating them. This highlights that although these dialects had large linguistic distance scores, they also had large geographical distances, which does not contradict the relationship. In addition, a comparison was made between linguistic distance and $Log_{10}(\text{GeographicalDistance})$. This resulted in an even weaker relationship, with R-squared being 0.097.
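For illustration, the regression step described above can be reproduced in a few lines of Python (an illustrative equivalent of the R “lm” call; the distance arrays below are made-up placeholders, not the study's data):

```python
# Illustrative sketch of regressing linguistic distance on geographical distance
# and reading off the R-squared value, as done with R's lm() above.
import numpy as np
from scipy.stats import linregress

# Placeholder data: one (geographical km, linguistic distance) pair per dialect pair.
geo = np.array([12.0, 40.0, 95.0, 150.0, 210.0, 300.0])
ling = np.array([0.21, 0.35, 0.30, 0.52, 0.48, 0.61])

fit = linregress(geo, ling)
print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.3f}")
print(f"R-squared = {fit.rvalue ** 2:.3f}")        # cf. the 0.131 reported above

# The same call on log10-transformed geographical distances corresponds to the
# second, weaker relationship reported above (R-squared = 0.097 in the text).
fit_log = linregress(np.log10(geo), ling)
print(f"R-squared (log10 distance) = {fit_log.rvalue ** 2:.3f}")
```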
<<</Linguistic and Geographical distance relationship>>> <<</Hierarchical clustering>>> <<<Small Numbers>>> <<<All to all comparison>>> Analysis was carried out in two ways. First of all, hierarchical clustering was performed with the best silhouette value cut. For this data set the best silhouette value was 0.48 and it suggested making 329 clusters. The clusters did not exhibit high purity. However, the ones that did quite clearly corresponded to unique subgroups of language families. Another way of looking at the all to all comparison data was by producing 10 clusters. The anticipated outcome was members being distinguished by number, forming 10 clean clusters. However, all the clusters were very impure and consisted of multiple different numbers. This might be due to different languages having phonetically similar words for different numbers in this case. All to all pairwise comparison could be an advantageous tool when used for language family branches or smaller but related subsets. It could validate whether languages belong to a certain group. <<</All to all comparison>>> <<</Small Numbers>>> <<</Results>>> <<<Conclusions>>> This project has aimed to develop computational methods to analyse and understand connections between human languages. The project included collecting words from different languages in order to form new databases, forming rules for the phonetic encoding of words and adjusting the phonetic substitution table. Several computational methods of calculating the pairwise distance between two words were implemented, including average, subset and all to all word distance calculation. This was done by incorporating the edit distance and the phonetic substitution table, and implementing them in SWI Prolog. This was followed by a detailed analysis of the distance scores, which was conducted with the dedicated automated routines and the developed R functions. They enabled hierarchical clustering with a cut made either according to the silhouette value or at a specified K value. They also provided summaries of the mean, standard deviation and other statistics, such as Bhattacharya scores. All these techniques delivered a thorough analysis of the data, and the automation of the processes ensured they were used efficiently. The outcome of the analysis of old sheep counting systems in different English dialects was the observation that the numbers “1”, “2”, “3”, “4” and “10” were more uniform across dialects than the others, suggesting that they might have been the most frequently used ones. Analysis of the all to all comparison did not provide pure clusters and shows that sheep counting numbers in different dialects are too different to form 10 clusters, each corresponding to a single number. This suggests that the dialects should be grouped into subsets. Furthermore, hierarchical clustering with the best silhouette cut suggested 9 potential groups, which consist of members with the most similar counting words. Surprisingly, this grouping was not entirely based on location. This corresponded with the difficulty of finding a relationship between geographic and linguistic distance; the conducted tests showed it was weak. Analysis of colour words revealed that within the Indo-European languages the words for the colour red were moderately better preserved. Both the Germanic and Romance language groups tended to have considerably more uniform words for the green and blue colours. In addition, the Romance language group preserved the colour black reasonably well.
Analysis of the distribution of linguistic distances showed multiple peaks within words for various language groups, suggesting that further language grouping could be done. Furthermore, hierarchical clustering with the silhouette cut recovered known and officially accepted language families. Most of the clusters were subgroups of existing language families. Some of them suggested a different sub-grouping according to colour words (e.g. Lithuanian was assigned to the Slavic languages, while Latvian formed a cluster on its own). The IPA databases resulted in the same relationships between languages as the non-IPA phonetically encoded databases. However, to fully explore the potential of IPA-encoded databases, they ought to be expanded and a customized weights table should be created. In conclusion, this project resulted in the creation of several effective computational techniques to explore many languages and their correlations all at once. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nBackground\nHuman languages\nIndo-European languages and Kurgan Hypothesis\nBrittonic languages\nSheep Counting System\nAims and Objectives\nOverall Aim\nSpecific Objectives\nData\nLanguage files\nSheep\nSheep counting words\nGeographical data\nAnalysis of average and subset linguistic distance\nColours\nMean and Standard Deviation\nDensity Plots\nBhattacharya Coefficients\nIPA\nMethodology\nMethods\nEdit Distance\nPhonetic Substitution Table\nHierarchical Clustering\nUsing the OC program\nUsing R\nFurther analysis with R\nProcess automation\nResults\nHierarchical clustering\nAll to all comparison analysis\nLinguistic and Geographical distance relationship\nSmall Numbers\nAll to all comparison\nConclusions" ], "type": "outline" }
1912.06602
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> That and There: Judging the Intent of Pointing Actions with Robotic Arms <<<Abstract>>> Collaborative robotics requires effective communication between a robot and a human partner. This work proposes a set of interpretive principles for how a robotic arm can use pointing actions to communicate task information to people by extending existing models from the related literature. These principles are evaluated through studies where English-speaking human subjects view animations of simulated robots instructing pick-and-place tasks. The evaluation distinguishes two classes of pointing actions that arise in pick-and-place tasks: referential pointing (identifying objects) and locating pointing (identifying locations). The study indicates that human subjects show greater flexibility in interpreting the intent of referential pointing compared to locating pointing, which needs to be more deliberate. The results also demonstrate the effects of variation in the environment and task context on the interpretation of pointing. Our corpus, experiments and design principles advance models of context, common sense reasoning and communication in embodied communication. <<</Abstract>>> <<<Introduction>>> Recent years have seen a rapid increase of robotic deployment, beyond traditional applications in cordoned-off workcells in factories, into new, more collaborative use-cases. For example, social robotics and service robotics have targeted scenarios like rehabilitation, where a robot operates in close proximity to a human. While industrial applications envision full autonomy, these collaborative scenarios involve interaction between robots and humans and require effective communication. For instance, a robot that is not able to reach an object may ask for a pick-and-place to be executed in the context of collaborative assembly. Or, in the context of a robotic assistant, a robot may ask for confirmation of a pick-and-place requested by a person. When the robot's form permits, researchers can design such interactions using principles informed by research on embodied face-to-face human–human communication. In particular, by realizing pointing gestures, an articulated robotic arm with a directional end-effector can exploit a fundamental ingredient of human communication BIBREF0. This has motivated roboticists to study simple pointing gestures that identify objects BIBREF1, BIBREF2, BIBREF3. This paper develops an empirically-grounded approach to robotic pointing that extends the range of physical settings, task contexts and communicative goals of robotic gestures. This is a step towards the richer and diverse interpretations that human pointing exhibits BIBREF4. This work has two key contributions. First, we create a systematic dataset, involving over 7000 human judgments, where crowd workers describe their interpretation of animations of simulated robots instructing pick-and-place tasks. Planned comparisons allow us to compare pointing actions that identify objects (referential pointing) with those that identify locations (locating pointing). 
They also allow us to quantify the effect of accompanying speech, task constraints and scene complexity, as well as variation in the spatial content of the scene. This new resource documents important differences in the way pointing is interpreted in different cases. For example, referential pointing is typically robust to the exactness of the pointing gesture, whereas locating pointing is much more sensitive and requires more deliberate pointing to ensure a correct interpretation. The Experiment Design section explains the overall process of data collection, the power analysis for the preregistered protocol, and the content presented to subjects across conditions. The second contribution is a set of interpretive principles, inspired by the literature on vague communication, that summarize the findings about robot pointing. They suggest that pointing selects from a set of candidate interpretations determined by the type of information specified, the possibilities presented by the scene, and the options compatible with the current task. In particular, we propose that pointing picks out all candidates that are not significantly further from the pointing ray than the closest alternatives. Based on our empirical results, we present design principles that formalize the relevant notions of “available alternatives” and “significantly further away”, which can be used in future pointing robots. The Analysis and Design Principles sections explain and justify this approach. <<</Introduction>>> <<<Related work>>> This paper focuses on the fundamental AI challenge of effective embodied communication, by proposing empirically determined generative rules for robotic pointing, including not only referential pointing but also pointing that is location-oriented in nature. Prior research has recognized the importance of effective communication by embracing the diverse modalities that AI agents can use to express information. In particular, perceiving physical actions BIBREF5 is often essential for socially-embedded behavior BIBREF6, as well as for understanding human demonstrations and inferring solutions that can be emulated by robots BIBREF7. Animated agents have long provided resources for AI researchers to experiment with models of conversational interaction including gesture BIBREF8, while communication using hand gestures BIBREF9 has played a role in supporting intelligent human-computer interaction. Enabling robots to understand and generate instructions to collaboratively carry out tasks with humans is an active area of research in natural language processing and human-robot interaction BIBREF10, BIBREF11. Since robotic hardware capabilities have increased, robots are increasingly seen as a viable platform for expressing and studying behavioral models BIBREF12. In the context of human-robot interaction, deictic or pointing gestures have been used as a form of communication BIBREF13. More recent work has developed richer abilities for referring to objects by using pre-recorded, human-guided motions BIBREF14, or using mixed-reality, multi-modal setups BIBREF15. Particular efforts in robotics have looked at making pointing gestures legible, adapting the process of motion planning so that robot movements are correctly understood as being directed toward the location of a particular object in space BIBREF2, BIBREF3. The current work uses gestures, including pointing gestures and demonstrations, that are legible in this sense. 
It goes on to explore how precise the targeting has to be to signal an intended interpretation. In natural language processing research, it's common to use an expanded pointing cone to describe the possible target objects for a pointing gesture, based on findings about human pointing BIBREF16, BIBREF17. Pointing cone models have also been used to model referential pointing in human–robot interaction BIBREF18, BIBREF19. In cluttered scenes, the pointing cone typically includes a region with many candidate referents. Understanding and generating object references in these situations involves combining pointing with natural language descriptions BIBREF1, BIBREF20. While we also find that many pointing gestures are ambiguous and can benefit from linguistic supplementation, our results challenge the assumption of a uniform pointing cone. We argue for an alternative, context-sensitive model. In addition to gestures that identify objects, we also look at pointing gestures that identify points in space. The closest related work involves navigation tasks, where pointing can be used to discriminate direction (e.g., left vs right) BIBREF21, BIBREF22. The spatial information needed for pick-and-place tasks is substantially more precise. Our findings suggest that this precision significantly impacts how pointing is interpreted and how it should be modeled. <<</Related work>>> <<<Communicating Pick-and-Place>>> This section provides a formalization of pick-and-place tasks and identifies the information required to specify them. Manipulator: Robots that can physically interact with their surroundings are called manipulators, of which robotic arms are the prime example. Workspace: The manipulator operates in a 3D workspace $\mathcal {W} \subseteq \mathbb {R}^3$. The workspace also contains a stable surface of interest defined by a plane $S\subset \mathcal {W}$ along with various objects. To represent 3D coordinates of workspace positions, we use $x\in \mathcal {W}$. End-effector: The tool-tips or end-effectors are geometries, often attached at the end of a robotic arm, that can interact with objects in the environment. These form a manipulator's chief mode of picking and placing objects of interest and range from articulated fingers to suction cups. A subset of the workspace that the robot can reach with its end-effector is called the reachable workspace. The end-effector in this work is used as a pointing indicator. Pick-and-place: Given a target object in the workspace, a pick-and-place task requires the object to be picked up from its initial position and orientation, and placed at a final position and orientation. When a manipulator executes this task in its reachable workspace, it uses its end-effector. The rest of this work ignores the effect of the object's orientation by considering objects with sufficient symmetry. Given this simplification, the pick-and-place task can be viewed as a transition from an initial position $x_{\textit {init}}\in \mathcal {W}$ to a final placement position $x_{\textit {final}}\in \mathcal {W}$. Thus, a pick-and-place task can be specified with the tuple $(x_{\textit {init}}, x_{\textit {final}})$. Pointing Action: Within its reachable workspace the end-effector of the manipulator can attain different orientations to fully specify a reachable pose $p$, which describes its position and orientation. The robots we study have a directional tooltip that viewers naturally see as projecting a ray $r$ along its axis outward into the scene.
In understanding pointing as communication, the key question is the relationship between the ray $r$ and the spatial values $x_{\textit {init}}$ and $x_{\textit {final}}$ that define the pick-and-place task. To make this concrete, we distinguish between the target of pointing and the intent of pointing. Given the ray $r$ coming out of the end-effector geometry, we define the target of the pointing as the intersection of this ray with the stable surface $S$. Meanwhile, the intent of pointing specifies one component of a pick-and-place task. There are two cases: Referential Pointing: The pointing action is intended to identify a target object $o$ to be picked up. This object is the referent of such an action. We can find $x_{\textit {init}}$, based on the present position of $o$. Locating Pointing: The pointing action is intended to identify the location in the workspace where the object needs to be placed, i.e., $x_{\textit {final}}$. We study effective ways to express intent for a pick-and-place task. In other words, what is the relationship between a pointing ray $r$ and the location $x_{\textit {init}}$ or $x_{\textit {final}}$ that it is intended to identify? To assess these relationships, we ask human observers to view animations expressing pick-and-place tasks and classify their interpretations. To understand the factors involved, we investigate a range of experimental conditions. <<</Communicating Pick-and-Place>>> <<<Experiments>>> Our experiments share a common animation platform, described in the Experimental Setup, and a common Data Collection protocol. The experiments differ in presenting subjects with a range of experimental conditions, as described in the corresponding section. All of the experiments described here together with the methods chosen to analyze the data were based on a private but approved pre-registration on aspredicted.org. The document is publicly available at: https://aspredicted.org/cg753.pdf. <<<Experiment Setup>>> Each animation shows a simulated robot producing two pointing gestures to specify a pick-and-place task. Following the animation, viewers are asked whether a specific image represents a possible result of the specified task. Robotic Platforms The experiments were performed on two different robotic geometries, based on a Rethink Baxter and a Kuka IIWA14. The Baxter is a dual-arm manipulator with two arms mounted on either side of a static torso. The experiments only move the right arm of the Baxter. The Kuka consists of a single arm that is vertically mounted, i.e., points upward at the base. In the experiments the robots are shown with a singly fingered tool-tip, where the pointing ray is modeled as the direction of this tool-tip. Note The real Baxter robot possesses a heads-up display that can be likened to a `head'. This has been removed in the simulations that were used in this study (as shown for example in Figure FIGREF7). Workspace Setup Objects are placed in front of the manipulators. In certain trials a table is placed in front of the robot as well, and the objects rest in stable configurations on top of the table. A pick-and-place task is provided, specified in terms of the positions of one of the objects. Objects The objects used in the study include small household items like mugs, saucers and boxes (cuboids), which are all placed in front of the robots.
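As a concrete illustration of the target-of-pointing definition above (the intersection of the pointing ray with the stable surface $S$), the following is a minimal Python sketch; the function name, the flat-table assumption, and the numeric values are ours rather than the paper's:

```python
# Minimal sketch: the pointing target as the intersection of the pointing ray
# with the plane of the rest surface S (assumed here to be a horizontal table).
import numpy as np

def pointing_target(origin, direction, plane_point, plane_normal):
    """Return the point where the ray origin + t*direction (t >= 0) meets the plane,
    or None if the ray is parallel to or points away from the plane."""
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the surface
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None                      # surface is behind the end-effector
    return origin + t * direction

# Example: end-effector tip 0.6 m above a table at z = 0, pointing down and forward.
tip = np.array([0.2, 0.0, 0.6])
ray = np.array([0.5, 0.1, -1.0])
print(pointing_target(tip, ray, plane_point=np.zeros(3),
                      plane_normal=np.array([0.0, 0.0, 1.0])))
```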
Motion Generation The end-effector of the manipulator is instructed to move to pre-specified waypoints, designed for the possibility of effective communication, that typically lie between the base of the manipulator and the object itself. Such waypoints fully specify both the position and orientation of the end-effector to satisfy pointing actions. The motions are performed by solving Inverse Kinematics for the end-effector geometry and moving the manipulator along these waypoints using a robotic motion planning library BIBREF23. The motions were replayed on the model of the robot, and rendered in Blender. Pointing Action Generation Potential pointing targets are placed using a cone $C(r, \theta )$, where $r$ represents the pointing ray and $\theta $ represents the vertex angle of the cone. As illustrated in Fig FIGREF2, the cone allows us to assess the possible divergence between the pointing ray and the actual location of potential target objects on the rest surface $S$. Given a pointing ray $r$, we assess the resolution of the pointing gesture by sampling $N$ object poses $p_i, i=1:N$ in $P=C(r, \theta ) \cap S$—the intersection of the pointing cone with the rest surface. While $p_i$ is the 6d pose for the object with translation $t \in R^3$ and orientation $R \in SO(3)$ only 2 degrees-of-freedom $(x, y)$ corresponding to $t$ are varied in the experiments. By fixing the $z$ coordinate for translation and restricting the z-axis of rotation to be perpendicular to $S$, it is ensured that the object rests in a physically stable configuration on the table. The $N$ object poses are sampled by fitting an ellipse within $P$ and dividing the ellipse into 4 quadrants $q_1\ldots q_4$ (See Figure FIGREF2 (C)). Within each quadrant $q_i$ the $N/4$ $(x,y)$ positions are sampled uniformly at random. For certain experiments additional samples are generated with an objective to increase coverage of samples within the ellipse by utilizing a dispersion measure. Speech Some experiments also included verbal cues with phrases like `Put that there' along with the pointing actions. It was very important for the pointing actions and these verbal cues to be in synchronization. To fulfill this we generate the voice using Amazon Polly with text written in SSML format and make sure that peak of the gesture (the moment a gesture comes to a stop) is in alignment with the peak of each audio phrase in the accompanying speech. During the generation of the video itself we took note of the peak moments of the gestures and then manipulated the duration between peaks of the audio using SSML to match them with gesture peaks after analyzing the audio with the open-source tool PRAAT (www.praat.org). <<</Experiment Setup>>> <<<Data Collection>>> Data collection was performed in Amazon Mechanical Turk. All subjects agreed to a consent form and were compensated at an estimated rate of USD 20 an hour. The subject-pool was restricted to non-colorblind US citizens. Subjects are presented a rendered video of the simulation where the robot performs one referential pointing action, and one locating pointing action which amounts to it pointing to an object, and then to a final location. During these executions synchronized speech is included in some of the trials to provide verbal cues. Then on the same page, subjects see the image that shows the result of the pointing action. They are asked whether the result is (a) correct, (b) incorrect, or (c) ambiguous. 
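The quadrant-based sampling used in the Pointing Action Generation step above can be sketched as follows (an illustrative Python version with assumed parameter names; the paper does not give its exact implementation):

```python
# Illustrative sketch of sampling N candidate object positions inside an ellipse
# fitted to P = C(r, theta) ∩ S, with N/4 samples drawn uniformly per quadrant.
import random

def sample_positions(cx, cy, a, b, n=8, seed=0):
    """(cx, cy): ellipse centre on the table; a, b: semi-axes; n: total samples."""
    rng = random.Random(seed)
    quadrants = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
    samples = []
    for sx, sy in quadrants:
        for _ in range(n // 4):
            while True:                      # rejection sampling inside the quadrant
                dx = rng.uniform(0, a) * sx
                dy = rng.uniform(0, b) * sy
                if (dx / a) ** 2 + (dy / b) ** 2 <= 1.0:
                    samples.append((cx + dx, cy + dy))
                    break
    return samples

# e.g. 8 candidate poses spread over a 0.30 m x 0.20 m ellipse around the target
for x, y in sample_positions(0.0, 0.5, a=0.15, b=0.10, n=8):
    print(round(x, 3), round(y, 3))
```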
To test our hypothesis, we studied the interpretation of the two pointing behaviors in different contexts. Assuming our conjecture and a significance level of 0.05, a sample of 28 people in each condition is enough to detect our effect with a 95% power. Participants are asked to report judgments on the interpretation of the pointing action in each class. Each participant undertakes two trials from each class. The range of different cases are described below. Overall, the data collection in this study involved over 7,290 responses to robot pointing actions. <<</Data Collection>>> <<<Experimental Conditions>>> We used our experiment setup to generate videos and images from the simulation for a range of different conditions. <<<Referential vs Locating>>> In this condition, to reduce the chances of possible ambiguities, we place only one mug is on the table. The Baxter robot points its right arm to the mug and then points to its final position, accompanied by a synchronized verbal cue, “Put that there.” We keep the motion identical across all the trials in this method. We introduce a variability in the initial position of the mug by sampling 8 random positions within conic sections subtending $45^{\circ } , 67.5^{\circ }, $ and $90^{\circ }$ on the surface of the table. New videos are generated for each such position of the mug. This way we can measure how flexible subjects are to the variation of the initial location of the referent object. To test the effect for the locating pointing action, we test similarly sampled positions around the final pointed location, and display these realizations of the mug as the result images to subjects, while the initial position of the mug is kept perfectly situated. A red cube that is in the gesture space of the robot, and is about twice as big as the mug is placed on the other side of the table as a visual guide for the subjects to see how objects can be placed on the table. We remove the tablet that is attached to Baxter's head for our experiments. Effect of speech In order to test the effect of speech on the disparity between the kinds of pointing actions, a set of experiments were designed under the Referential vs Locating method with and without any speech. All subsequent methods will include verbal cues during their action execution. These cues are audible in the video. <<</Referential vs Locating>>> <<<Reverse Task>>> One set of experiments are run for the pick-and-place task with the initial and final positions of the object flipped during the reverse task. As opposed to the first set of experiments, the robot now begins by pointing to an object in the middle of the table, and then to an area areas towards the table's edge, i.e., the pick and place positions of the object are `reversed'. The trials are meant to measure the sensitivity of the subjects in pick trials to the direction of the pointing gestures and to the absolute locations that the subjects thought the robot was pointing at. This condition is designed to be identical to the basic Referential vs Locating study, except for the direction of the action. The motions are still executed on the Baxter's right arm. <<</Reverse Task>>> <<<Different Robotic Arm>>> In order to ensure that the results obtained in this study are not dependent on the choice of the robotic platform or its visual appearance, a second robot—a singly armed industrial Kuka manipulator—is also evaluated in a Referential vs Locating study (shown in Figure FIGREF6). 
<<</Different Robotic Arm>>> <<<Cluttered Scene>>> To study how the presence of other objects would change the behavior of referential pointing, we examine the interpretation of the pointing actions when there is more than one mug on the table. Given the instructions to the subjects, both objects are candidate targets. This experiment allows the investigation of the effect of a distractor object in the scene on referential pointing. We start with a setup where there are two mugs placed on the table (similar to the setup in Figure FIGREF14). One is a target mug placed at position $x_{\textit {object}}$ and a distractor mug at position $x_{\textit {distractor}}$. With the robot performing an initial pointing action to a position $x_{\textit {init}}$ on the table. Both the objects are sampled around $x_{\textit {init}}$ along the diametric line of the conic section arising from increasing cone angles of $45^\circ , 67.5^\circ , $ and $90^\circ $, where the separation of $x_{\textit {object}}$, and $x_{\textit {distractor}}$ is equal to the length of the diameter of the conic section, $D$. The objects are then positioned on the diametric line with a random offset between $[-\frac{D}{2}, \frac{D}{2}]$ around $x_{\textit {init}}$ and along the line. This means that the objects are at various distances apart, and depending upon the offset, one of the objects is nearer to the pointing action. The setup induces that the nearer mug serves as the object, and the farther one serves as the distractor. The motions are performed on the Baxter's right arm. The camera perspective in simulation is set to be facing into the pointing direction. The subjects in this trial are shown images of the instant of the referential pointing action. <<</Cluttered Scene>>> <<<Natural vs Unnatural scene>>> In this condition we study how the contextual and physical understanding of the world impacts the interpretation of pointing gestures. We generate a scenario for locating pointing in which the right arm of the Baxter points to a final placement position for the cuboidal object on top of a stack of cuboidal objects but towards the edge which makes it physically unstable. The final configurations of the object (Figure FIGREF17) shown to the users were a) object lying on top of the stack b) object in the unstable configuration towards the edge of the stack and c) object at the bottom of the stack towards one side. New videos are generated for each scenario along with verbal cues. The pointing action, as well as the objects of interest, stay the identical between the natural, and unnatural trials. The difference lies in other objects in the scene that could defy gravity and float in the unnatural trials. The subjects were given a text-based instruction at the beginning of an unnatural trial saying they were seeing a scene where “gravity does not exist.” <<</Natural vs Unnatural scene>>> <<<Different verbs>>> To test if the effect is specific to the verb put, we designed a control condition where everything remained the same as the Referential vs Locating trials except the verb put which we replaced with place, move and push. Here again we collect 30 data points for each sampled $x^*$. 
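Returning to the cluttered-scene condition described above, a rough sketch of how the target and distractor positions could be generated is given below (illustrative only; the 2D simplification and variable names are ours):

```python
# Illustrative sketch: place a target mug and a distractor mug on the diametric
# line of the conic section, separated by its diameter D, with a random offset
# in [-D/2, D/2] around the pointed-at position x_init.
import random

def place_mugs(x_init, direction, D, seed=1):
    """x_init: (x, y) pointing target on the table; direction: unit 2D vector of
    the diametric line; D: diameter of the conic section on the table."""
    rng = random.Random(seed)
    offset = rng.uniform(-D / 2, D / 2)
    centre = (x_init[0] + offset * direction[0], x_init[1] + offset * direction[1])
    half = D / 2
    mug_a = (centre[0] - half * direction[0], centre[1] - half * direction[1])
    mug_b = (centre[0] + half * direction[0], centre[1] + half * direction[1])

    def dist(p):
        return ((p[0] - x_init[0]) ** 2 + (p[1] - x_init[1]) ** 2) ** 0.5

    # the mug nearer to x_init acts as the target, the other as the distractor
    target, distractor = sorted([mug_a, mug_b], key=dist)
    return target, distractor

print(place_mugs((0.0, 0.5), (1.0, 0.0), D=0.3))
```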
<<</Different verbs>>> <<</Experimental Conditions>>> <<</Experiments>>> <<<Analysis>>> <<<Natural vs Unnatural>>> As shown in Table TABREF21, we observed that in the natural scene, when the end-effector points towards the edge of the cube that is on top of the stack, subjects place the new cube on top of the stack or on the table instead of the edge of the cube. However, in the unnatural scene, when we explain to subjects that there is no gravity, a majority agree with the final image that has the cube on the edge. To test if this difference is statistically significant, we use the Fisher exact test BIBREF25. The test statistic value is $0.0478$. The result is significant at $p < 0.05$. <<</Natural vs Unnatural>>> <<<Cluttered>>> The data from these trials show how human subjects select between the two candidate target objects on the table. Since the instructions do not serve to disambiguate the target mug, the collected data show what the observers deemed as the correct target. Figure FIGREF24 visualizes subjects' responses across trials. The location of each pie uses the $x$-axis to show how much closer one candidate object is to the pointing target than the other, and uses the $y$-axis to show the overall imprecision of pointing. Each pie in Figure FIGREF24 shows the fraction of responses across trials that recorded the nearer (green) mug as correct compared to the farther mug (red). The white shaded fractions of the pies show the fraction of responses where subjects found the gesture ambiguous. As we can see in Figure FIGREF24, once the two objects are roughly equidistant from the center of pointing (within about 10 cm), subjects tend to regard the pointing gesture as ambiguous, but as this distance increases, subjects are increasingly likely to prefer the closer target. In all cases, wherever subjects have a preference for one object over the other, subjects picked the mug that was the nearer target of the pointing action more often than the further one. <<</Cluttered>>> <<</Analysis>>> <<<Human Evaluation of Instructions>>> After designing and conducting our experiments, we became concerned that subjects might regard imprecise referential pointing as understandable but unnatural. If they did, their judgments might combine ordinary interpretive reasoning with additional effort, self-consciousness or repair. We therefore added a separate evaluation to assess how natural the generated pointing actions and instructions are. We recruited 480 subjects from Mechanical Turk using the same protocol described in our Data Collection procedure, and asked them to rank how natural they regarded the instruction on a scale of 0 to 5. The examples were randomly sampled from the videos of the referential pointing trials that we showed to subjects for both the Baxter and Kuka robots. These examples were selected in a way that we obtained an equal number of samples from each cone. The average ratings for samples from the 45, ${67.5}$ and 90 cones are $3.625, 3.521$ and $3.650$ respectively. For Kuka, the average ratings for samples from the 45, ${67.5}$ and 90 cones are $3.450, 3.375$, and $3.400$. Overall, the average for Baxter is $3.600$, and for Kuka is $3.408$. The differences between Kuka and Baxter and the differences across cones are not statistically significant ($t \le |1.07|, p > 0.1 $). Thus we have no evidence that subjects regard imprecise pointing as problematic.
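As a concrete illustration of the kind of significance test used in the Natural vs Unnatural analysis above, here is a minimal Python sketch using SciPy; the 2x2 counts are hypothetical placeholders, not the paper's data (which is in Table TABREF21):

```python
# Hypothetical example of a Fisher exact test on a 2x2 contingency table:
# rows = scene condition (natural / no-gravity), columns = whether the subject
# accepted the cube balanced on the edge of the stack. Counts are made up.
from scipy.stats import fisher_exact

table = [[ 6, 24],   # natural scene:    6 accepted the edge placement, 24 did not
         [16, 14]]   # no-gravity scene: 16 accepted, 14 did not
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate that the two scene conditions lead to
# significantly different placement judgments.
```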
<<</Human Evaluation of Instructions>>> <<<Design Principles>>> The results of the experiments suggest that locating pointing is interpreted rather precisely, whereas referential pointing is interpreted relatively flexibly. This naturally aligns with the possibility for alternative interpretations. For spatial reference, any location is a potential target. By contrast, for referential pointing, it suffices to distinguish the target object from its distractors. We can characterize this interpretive process in formal terms by drawing on observations from the philosophical and computational literature on vagueness BIBREF26, BIBREF27, BIBREF28. Any pointing gesture starts from a set of candidate interpretations $D \subset \mathcal {W}$ determined by the context and the communicative goal. In unconstrained situations, locating pointing allows a full set of candidates $D = \mathcal {W}.$ If factors like common-sense physics impose task constraints, that translates to restrictions on feasible targets $CS$, leading to a more restricted set of candidates $D = CS \cap \mathcal {W}$. Finally, for referential pointing, the potential targets are located at $x_1 \ldots x_N \in S$, and $D = \lbrace x_1 \ldots x_N \rbrace .$ Based on the communicative setting, we know that the pointing gesture, like any vague referring expression, must select at least one of the possible interpretations BIBREF28. We can find the best interpretation by its distance to the target $x^*$ of the pointing gesture. Using $d(x,x^*)$ to denote this distance gives us a threshold $\theta = \min _{x \in D} d(x, x^*)$. Vague descriptions can't be sensitive to fine distinctions BIBREF27. So if a referent at $\theta $ is close enough to the pointing target, then another at $\theta + \epsilon $ must be close enough as well, for any value of $\epsilon $ that is not significant in the conversational context. Our results suggest that viewers regard 10cm (in the scale of the model simulation) as an approximate threshold for a significant difference in our experiments. In all, we predict that a pointing gesture is interpreted as referring to $\lbrace x \in D | d(x,x^*) \le \theta + \epsilon \rbrace .$ We explain the different interpretations through the different choice of $D$. <<<Locating Pointing>>> For unconstrained locating pointing, $x^* \in D$, so $\theta =0$. That means the intended placement cannot differ significantly from the pointing target. Taking into account common sense, we allow for a small divergence that connects the pointing, for example, to the closest stable placement. <<</Locating Pointing>>> <<<Referential Pointing>>> For referential pointing, candidates play a much stronger role. A pointing gesture always has the closest object to the pointing target as a possible referent. However, ambiguities arise when the geometries of more than one object intersect with the $\theta +\epsilon $-neighborhood of $x^*$. We can think of that, intuitively, in terms of the effects of $\theta $ and $\epsilon $. Alternative referents give rise to ambiguity not only when they are too close to the target location ($\theta $) but even when they are simply not significantly further away from the target location ($\epsilon $). <<</Referential Pointing>>> <<</Design Principles>>> <<<Conclusion and Future Work>>> We have presented an empirical study of the interpretation of simulated robots instructing pick-and-place tasks. Our results show that robots can effectively combine pointing gestures and spoken instructions to communicate both object and spatial information.
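The interpretation rule $\lbrace x \in D \mid d(x,x^*) \le \theta + \epsilon \rbrace$ from the Design Principles section can be made concrete with a short sketch (illustrative Python; the candidate positions and the 0.10 m tolerance are assumptions, the latter based on the approximate 10 cm threshold reported above):

```python
# Illustrative sketch of the interpretation rule {x in D : d(x, x*) <= theta + eps}.
# Candidates not significantly further than the closest one are all admitted.
import math

EPS = 0.10  # ~10 cm, the assumed "not significantly further" tolerance

def interpret(candidates, target, eps=EPS):
    """candidates: dict name -> (x, y) position; target: (x, y) pointing target."""
    d = {name: math.dist(pos, target) for name, pos in candidates.items()}
    theta = min(d.values())                     # distance to the closest candidate
    picked = [name for name, dist in d.items() if dist <= theta + eps]
    return picked                               # more than one name means ambiguity

# Referential pointing with two mugs (positions are made-up examples):
mugs = {"near mug": (0.02, 0.50), "far mug": (0.30, 0.55)}
print(interpret(mugs, target=(0.0, 0.5)))       # -> ['near mug']
print(interpret(mugs, target=(0.15, 0.52)))     # -> both mugs: ambiguous gesture
```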
We offer an empirical characterization—the first, to the best of the authors' knowledge—of the use of robot gestures to communicate precise spatial locations for placement purposes. We have suggested that pointing, in line with other vague references, give rise to a set of candidate interpretations that depend on the task, context and communicative goal. Users pick the interpretations that are not significantly further from the pointing ray than the best ones. This contrasts with previous models that required pointing gestures to target a referent exactly or fall within a context-independent pointing cone. Our work has a number of limitations that suggest avenues for future work. It remains to implement the design principles on robot hardware, explore the algorithmic process for generating imprecise but interpretable gestures, and verify the interpretations of physically co-present viewers. Note that we used a 2D interface, which can introduce artifacts, for example from the effect of perspective. In addition, robots can in general trade off pointing gestures with other descriptive material in offering instructions. Future work is needed to assess how such trade-offs play out in location reference, not just in object reference. More tight-knit collaborative scenarios need to be explored, including ones where multiple pick-and-place tasks can be composed to communicate more complex challenges and ones where they involve richer human environments. Our study of common sense settings opens up intriguing avenues for such research, since it suggests ways to take into account background knowledge and expectations to narrow down the domain of possible problem specifications in composite tasks like “setting up a dining table.” While the current work studies the modalities of pointing and verbal cues, effects of including additional robotic communication in the form of heads-up displays or simulated eye-gaze would be other directions to explore. Such extensions would require lab experiments with human subjects and a real robot. This is the natural next step of our work. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated work\nCommunicating Pick-and-Place\nExperiments\nExperiment Setup\nData Collection\nExperimental Conditions\nReferential vs Locating\nReverse Task\nDifferent Robotic Arm\nCluttered Scene\nNatural vs Unnatural scene\nDifferent verbs\nAnalysis\nNatural vs Unnatural\nCluttered\nHuman Evaluation of Instructions\nDesign Principles\nLocating Pointing\nReferential Pointing\nConclusion and Future Work" ], "type": "outline" }
2002.01030
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Detecting Fake News with Capsule Neural Networks <<<Abstract>>> Fake news is dramatically increased in social media in recent years. This has prompted the need for effective fake news detection algorithms. Capsule neural networks have been successful in computer vision and are receiving attention for use in Natural Language Processing (NLP). This paper aims to use capsule neural networks in the fake news detection task. We use different embedding models for news items of different lengths. Static word embedding is used for short news items, whereas non-static word embeddings that allow incremental up-training and updating in the training phase are used for medium length or large news statements. Moreover, we apply different levels of n-grams for feature extraction. Our proposed architectures are evaluated on two recent well-known datasets in the field, namely ISOT and LIAR. The results show encouraging performance, outperforming the state-of-the-art methods by 7.8% on ISOT and 3.1% on the validation set, and 1% on the test set of the LIAR dataset. <<</Abstract>>> <<<Introduction>>> Flexibility and ease of access to social media have resulted in the use of online channels for news access by a great number of people. For example, nearly two-thirds of American adults have access to news by online channels BIBREF0, BIBREF1. BIBREF2 also reported that social media and news consumption is significantly increased in Great Britain. In comparison to traditional media, social networks have proved to be more beneficial, especially during a crisis, because of the ability to spread breaking news much faster BIBREF3. All of the news, however, is not real and there is a possibility of changing and manipulating real information by people due to political, economic, or social motivations. This manipulated data leads to the creation of news that may not be completely true or may not be completely false BIBREF4. Therefore, there is misleading information on social media that has the potential to cause many problems in society. Such misinformation, called fake news, has a wide variety of types and formats. Fake advertisements, false political statements, satires, and rumors are examples of fake news BIBREF0. This widespread of fake news that is even more than mainstream media BIBREF5 motivated many researchers and practitioners to focus on presenting effective automatic frameworks for detecting fake news BIBREF6. Google has announced an online service called “Google News Initiative” to fight fake news BIBREF7. This project will try to help readers for realizing fake news and reports BIBREF8. Detecting fake news is a challenging task. A fake news detection model tries to predict intentionally misleading news based on analyzing the real and fake news that previously reviewed. Therefore, the availability of high-quality and large-size training data is an important issue. The task of fake news detection can be a simple binary classification or, in a challenging setting, can be a fine-grained classification BIBREF9. After 2017, when fake news datasets were introduced, researchers tried to increase the performance of their models using this data. 
Kaggle dataset, ISOT dataset, and LIAR dataset are some of the most well-known publicly available datasets BIBREF10. In this paper, we propose a new model based on capsule neural networks for detecting fake news. We propose architectures for detecting fake news in different lengths of news statements by using different varieties of word embedding and applying different levels of n-gram as feature extractors. We show these proposed models achieve better results in comparison to the state-of-the-art methods. The rest of the paper is organized as follows: Section SECREF2 reviews related work about fake news detection. Section SECREF3 presents the model proposed in this paper. The datasets used for fake news detection and evaluation metrics are introduced in Section SECREF4. Section SECREF5 reports the experimental results, comparison with the baseline classification and discussion. Section SECREF6 summarizes the paper and concludes this work. <<</Introduction>>> <<<Related work>>> Fake news detection has been studied in several investigations. BIBREF11 presented an overview of deception assessment approaches, including the major classes and the final goals of these approaches. They also investigated the problem using two approaches: (1) linguistic methods, in which the related language patterns were extracted and precisely analyzed from the news content for making decision about it, and (2) network approaches, in which the network parameters such as network queries and message metadata were deployed for decision making about new incoming news. BIBREF12 proposed an automated fake news detector, called CSI that consists of three modules: Capture, Score, and Integrate, which predicts by taking advantage of three features related to the incoming news: text, response, and source of it. The model includes three modules; the first one extracts the temporal representation of news articles, the second one represents and scores the behavior of the users, and the last module uses the outputs of the first two modules (i.e., the extracted representations of both users and articles) and use them for the classification. Their experiments demonstrated that CSI provides an improvement in terms of accuracy. BIBREF13 introduced a new approach which tries to decide if a news is fake or not based on the users that interacted with and/or liked it. They proposed two classification methods. The first method deploys a logistic regression model and takes the user interaction into account as the features. The second one is a novel adaptation of the Boolean label crowdsourcing techniques. The experiments showed that both approaches achieved high accuracy and proved that considering the users who interact with the news is an important feature for making a decision about that news. BIBREF14 introduced two new datasets that are related to seven different domains, and instead of short statements containing fake news information, their datasets contain actual news excerpts. They deployed a linear support vector machine classifier and showed that linguistic features such as lexical, syntactic, and semantic level features are beneficial to distinguish between fake and genuine news. The results showed that the performance of the developed system is comparable to that of humans in this area. BIBREF15 provided a novel dataset, called LIAR, consisting of 12,836 labeled short statements. The instances in this dataset are chosen from more natural contexts such as Facebook posts, tweets, political debates, etc. 
They proposed neural network architecture for taking advantage of text and meta-data together. The model consists of a Convolutional Neural Network (CNN) for feature extraction from the text and a Bi-directional Long Short Term Memory (BiLSTM) network for feature extraction from the meta-data and feeds the concatenation of these two features into a fully connected softmax layer for making the final decision about the related news. They showed that the combination of metadata with text leads to significant improvements in terms of accuracy. BIBREF16 proved that incorporating speaker profiles into an attention-based LSTM model can improve the performance of a fake news detector. They claim speaker profiles can contribute to the model in two different ways. First, including them in the attention model. Second, considering them as additional input data. They used party affiliation, speaker location, title, and credit history as speaker profiles, and they show this metadata can increase the accuracy of the classifier on the LIAR dataset. BIBREF17 presented a new dataset for fake news detection, called ISOT. This dataset was entirely collected from real-world sources. They used n-gram models and six machine learning techniques for fake news detection on the ISOT dataset. They achieved the best performance by using TF-IDF as the feature extractor and linear support vector machine as the classifier. BIBREF18 proposed an end-to-end framework called event adversarial neural network, which is able to extract event-invariant multi-modal features. This model has three main components: the multi-modal feature extractor, the fake news detector, and the event discriminator. The first component uses CNN as its core module. For the second component, a fully connected layer with softmax activation is deployed to predict if the news is fake or not. As the last component, two fully connected layers are used, which aims at classifying the news into one of K events based on the first component representations. BIBREF19 developed a tractable Bayesian algorithm called Detective, which provides a balance between selecting news that directly maximizes the objective value and selecting news that aids toward learning user's flagging accuracy. They claim the primary goal of their works is to minimize the spread of false information and to reduce the number of users who have seen the fake news before it becomes blocked. Their experiments show that Detective is very competitive against the fictitious algorithm OPT, an algorithm that knows the true users’ parameters, and is robust in applying flags even in a setting where the majority of users are adversarial. <<</Related work>>> <<<Capsule networks for fake news detection>>> In this section, we first introduce different variations of word embedding models. Then, we proposed two capsule neural network models according to the length of the news statements that incorporate different word embedding models for fake news detection. <<<Different variations of word embedding models>>> Dense word representation can capture syntactic or semantic information from words. When word representations are demonstrated in low dimensional space, they are called word embedding. In these representations, words with similar meanings are in close position in the vector space. In 2013, BIBREF20 proposed word2vec, which is a group of highly efficient computational models for learning word embeddings from raw text. 
These models are created by training neural networks with two-layers trained by a large volume of text. These models can produce vector representations for every word with several hundred dimensions in a vector space. In this space, words with similar meanings are mapped to close coordinates. There are some pre-trained word2vec vectors like 'Google News' that was trained on 100 billion words from Google news. One of the popular methods to improve text processing performance is using these pre-trained vectors for initializing word vectors, especially in the absence of a large supervised training set. These distributed vectors can be fed into deep neural networks and used for any text classification task BIBREF21. These pre-trained embeddings, however, can further be enhanced. BIBREF21 applied different learning settings for vector representation of words via word2vec for the first time and showed their superiority compared to the regular pre-trained embeddings when they are used within a CNN model. These settings are as follow: Static word2vec model: in this model, pre-trained vectors are used as input to the neural network architecture, these vectors are kept static during training, and only the other parameters are learned. Non-static word2vec model: this model uses the pre-trained vectors at the initialization of learning, but during the training phase, these vectors are fine-tuned for each task using the training data of the target task. Multichannel word2vec model: the model uses two sets of static and non-static word2vec vectors, and a part of vectors fine-tune during training. <<</Different variations of word embedding models>>> <<<Proposed model>>> Although different models based on deep neural networks have been proposed for fake news detection, there is still a great need for further improvements in this task. In the current research, we aim at using capsule neural networks to enhance the accuracy of fake news identification systems. The capsule neural network was introduced by BIBREF22 for the first time in the paper called “Dynamic Routing Between Capsules”. In this paper, they showed that capsule network performance for MNIST dataset on highly overlapping digits could work better than CNNs. In computer vision, a capsule network is a neural network that tries to work inverse graphics. In a sense, the approach tries to reverse-engineer the physical process that produces an image of the world BIBREF23. The capsule network is composed of many capsules that act like a function, and try to predict the instantiation parameters and presence of a particular object at a given location. One key feature of capsule networks is equivariance, which aims at keeping detailed information about the location of the object and its pose throughout the network. For example, if someone rotates the image slightly, the activation vectors also change slightly BIBREF24. One of the limitations of a regular CNN is losing the precise location and pose of the objects in an image. Although this is not a challenging issue when classifying the whole image, it can be a bottleneck for image segmentation or object detection that needs precise location and pose. A capsule, however, can overcome this shortcoming in such applications BIBREF24. Capsule networks have recently received significant attention. 
This model aims at improving CNNs and RNNs by adding the following capabilities to each source and target node: (1) the source node has the capability of deciding about the number of messages to transfer to target nodes, and (2) the target node has the capability of deciding about the number of messages that may be received from different source nodes BIBREF25. After the success of capsule networks in computer vision tasks BIBREF26, BIBREF27, BIBREF28, capsule networks have been used in different NLP tasks, including text classification BIBREF29, BIBREF30, multi-label text classification BIBREF31, sentiment analysis BIBREF18, BIBREF32, identifying aggression and toxicity in comments BIBREF33, and zero-shot user intent detection BIBREF34. In capsule networks, the features that are extracted from the text are encapsulated into capsules (groups of neurons). The first work that applied capsule networks for text classification was done by BIBREF35. In their research, the performance of the capsule network as a text classification network was evaluated for the first time. Their capsule network architecture includes a standard convolutional layer called the n-gram convolutional layer that works as a feature extractor. The second layer is a layer that maps scalar-valued features into a capsule representation and is called the primary capsule layer. The outputs of these capsules are fed to a convolutional capsule layer. In this layer, each capsule is only connected to a local region in the layer below. In the last step, the output of the previous layer is flattened and fed through a feed-forward capsule layer. For this layer, every capsule of the output is considered as a particular class. In this architecture, a max-margin loss is used for training the model. Figure FIGREF6 shows the architecture proposed by BIBREF35. Some characteristics of capsules make them suitable for representing a sentence or document as a vector for text classification. These characteristics include representing attributes of partial entities and expressing semantic meaning in a wide space BIBREF29. For fake news identification with different lengths of statements, our model benefits from several parallel capsule networks and uses average pooling in the last stage. With this architecture, the models can learn more meaningful and extensive text representations on different n-gram levels according to the length of texts. Depending on the length of the news statements, we use two different architectures. Figure FIGREF7 depicts the structure of the proposed model for medium or long news statements. In the model, a non-static word embedding is used as an embedding layer. In this layer, we use 'glove.6B.300d' as a pre-trained word embedding, and use four parallel networks with four different filter sizes (2, 3, 4, 5) as n-gram convolutional layers for feature extraction. In the next layers, for each parallel network, there is a primary capsule layer and a convolutional capsule layer, as presented in Figure FIGREF6. A fully connected capsule layer is used in the last layer for each parallel network. At the end, average pooling is applied to produce the final result. For short news statements, due to the limitation of word sequences, a different structure has been proposed. The layers are like those of the first model, but only two parallel networks are considered, with filter sizes 3 and 5. In this model, a static word embedding is used. Figure FIGREF8 shows the structure of the proposed model for short news statements.
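To make the overall layout more tangible, the following is a simplified Keras sketch of the medium/long-statement variant: a GloVe-initialised, trainable (non-static) embedding feeding four parallel n-gram convolution branches whose outputs are averaged. It is only an approximation of the authors' model; in particular, the primary, convolutional and fully connected capsule layers with dynamic routing are replaced here by a plain dense head per branch, the loss is an ordinary cross-entropy stand-in, and names such as `embedding_matrix` and `NUM_CLASSES` are assumptions.

```python
# Simplified sketch (not the authors' exact architecture): parallel n-gram
# convolution branches over a non-static embedding, averaged at the end.
# The capsule layers of the paper are approximated by a dense head per branch.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, initializers

VOCAB_SIZE, EMB_DIM, MAX_LEN, NUM_CLASSES = 20000, 300, 300, 2
# embedding_matrix would be built from glove.6B.300d; random here as a stand-in.
embedding_matrix = np.random.normal(size=(VOCAB_SIZE, EMB_DIM)).astype("float32")

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(
    VOCAB_SIZE, EMB_DIM,
    embeddings_initializer=initializers.Constant(embedding_matrix),
    trainable=True,            # non-static: the vectors are fine-tuned in training
)(inputs)

branch_outputs = []
for kernel_size in (2, 3, 4, 5):            # the four n-gram levels
    h = layers.Conv1D(256, kernel_size, activation="relu")(x)
    h = layers.GlobalMaxPooling1D()(h)
    h = layers.Dense(128, activation="relu")(h)       # stand-in for capsule layers
    branch_outputs.append(layers.Dense(NUM_CLASSES, activation="softmax")(h))

outputs = layers.Average()(branch_outputs)  # average pooling over the branches
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

Under the same assumptions, the short-statement variant in Figure FIGREF8 would differ only in using two branches (kernel sizes 3 and 5) and setting trainable=False for a static embedding.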
<<</Proposed model>>> <<</Capsule networks for fake news detection>>> <<<Evaluation>>> <<<Dataset>>> Several datasets have been introduced for fake news detection. One of the main requirements for using neural architectures is having a large dataset to train the model. In this paper, we use two datasets, namely ISOT fake news BIBREF17 and LIAR BIBREF15, which have a large number of documents for training deep models. The news statements in ISOT are of medium or long length, while those in LIAR are short. <<<The ISOT fake news dataset>>> In 2017, BIBREF17 introduced a new dataset that was collected from real-world sources. This dataset consists of news articles from Reuters.com and Kaggle.com for real news and fake news, respectively. Every instance in the dataset is longer than 200 characters. For each article, the following metadata is available: article type, article text, article title, article date, and article label (fake or real). Table TABREF12 shows the type and size of the articles for the real and fake categories. <<</The ISOT fake news dataset>>> <<<The LIAR dataset>>> As mentioned in Section SECREF2, one of the recent well-known datasets is provided by BIBREF15, who introduced a new large dataset called LIAR, which includes 12.8K human-labeled short statements from the POLITIFACT.COM API. Each statement is evaluated by a POLITIFACT.COM editor for its validity. Six fine-grained labels are considered for the degree of truthfulness: pants-fire, false, barely-true, half-true, mostly-true, and true. The distribution of labels in this dataset is as follows: 1,050 pants-fire labels and a range of 2,063 to 2,638 for the other labels. In addition to news statements, this dataset provides several metadata fields as speaker profiles for each news item. These metadata include valuable information about the subject, speaker, job, state, party, and total credit history count of the speaker of the news. The total credit history count includes the barely-true counts, false counts, half-true counts, mostly-true counts, and pants-fire counts. The statistics of the LIAR dataset are shown in Table TABREF14. Some excerpt samples from the LIAR dataset are presented in Table TABREF15. <<</The LIAR dataset>>> <<</Dataset>>> <<<Experimental setup>>> The experiments of this paper were conducted on a PC with an Intel Core i7 6700k 3.40GHz CPU, 16GB RAM, and an Nvidia GeForce GTX 1080Ti GPU in a Linux workstation. For implementing the proposed model, the Keras library BIBREF36 was used, which is a high-level neural network API. <<</Experimental setup>>> <<<Evaluation metrics>>> The evaluation metric in our experiments is the classification accuracy. Accuracy is the ratio of correct predictions to the total number of samples and is computed as $Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$, where TP represents the number of True Positive results, FP represents the number of False Positive results, TN represents the number of True Negative results, and FN represents the number of False Negative results. <<</Evaluation metrics>>> <<</Evaluation>>> <<<Results>>> For evaluating the effectiveness of the proposed model, a series of experiments on the two datasets was performed. These experiments are explained in this section and the results are compared to other baseline methods. We also discuss the results for each dataset separately. <<<Classification for ISOT dataset>>> As mentioned in Section SECREF4, BIBREF17 presented the ISOT dataset.
According to the baseline paper, we consider 1000 articles for each of the real and fake sets, a total of 2000 articles for the test set, and the model is trained with the rest of the data. First, the proposed model is evaluated with the different word embeddings described in Section SECREF1. Table TABREF20 shows the result of applying different word embeddings to the proposed model on ISOT, which consists of medium and long length news statements. The best result is achieved by the non-static embedding. BIBREF17 evaluated different machine learning methods for fake news detection on the ISOT dataset, including the Support Vector Machine (SVM), the Linear Support Vector Machine (LSVM), the K-Nearest Neighbor (KNN), the Decision Tree (DT), the Stochastic Gradient Descent (SGD), and the Logistic Regression (LR) methods. Table TABREF21 shows the performance of the non-static capsule network for fake news detection in comparison to the other methods. The accuracy of our model is 7.8% higher than the best result, achieved by LSVM. <<</Classification for ISOT dataset>>> <<<Discussion>>> The proposed model can predict the true labels with high accuracy, resulting in a very small number of wrong predictions. Table TABREF23 shows the titles of two wrongly predicted samples for fake news detection. To analyze our results, we investigate the effect of the words that appear in training statements tagged as real and fake, separately. For this analysis, all of the words and their frequencies are extracted from the two wrong samples and from both the real- and fake-labeled parts of the training data. Table TABREF24 shows the information of this data. Then, for every wrongly predicted sample, stop-words are omitted and words with a frequency of more than two are listed. After that, all of these words and their frequencies in the real and fake training data are extracted. In this part, the frequencies of these words are normalized. Table TABREF25 and Table TABREF28 show the normalized word frequencies for each sample, respectively. In these tables, for ease of comparison, the normalized frequencies for the real and fake labels of the training data and the normalized frequency of each word in every wrong sample are multiplied by 10. The label of Sample 1 is predicted as fake, but it is real. In Table TABREF25, the six most frequent words of Sample 1 are listed; the word "tax" appears twice as often as each of the other words in Sample 1, and this word is clearly more frequent in the training data with real labels. In addition to this word, the same observation holds for other words like "state". The text of Sample 2 is predicted as real news, but it is fake. Table TABREF28 lists six frequent words of Sample 2. The two most frequent words of this text are "trump" and "sanders". These words are more frequent in the training data with fake labels than in the training data with real labels. "All" and "even" are two other frequent words. "Even" is used to refer to something surprising, unexpected, unusual, or extreme, and "all" means everyone or the complete number or amount; therefore, a text that includes these words is more likely to be classified as fake news. These experiments show the strong effect of word frequency on the prediction of the labels. <<</Discussion>>> <<<Classification for the LIAR dataset>>> As mentioned in Section SECREF13, the LIAR dataset is a multi-label dataset with short news statements.
In comparison to the ISOT dataset, the classification task for this dataset is more challenging. We evaluate the proposed model while using different metadata, which are considered as speaker profiles. Table TABREF30 shows the performance of the capsule network for fake news detection when adding each type of metadata. The best result of the model is achieved by using history as metadata. The results show that this model can perform better than state-of-the-art baselines, including the hybrid CNN BIBREF15 and LSTM with attention BIBREF16, by 3.1% on the validation set and 1% on the test set. <<</Classification for the LIAR dataset>>> <<</Results>>> <<<Conclusion>>> In this paper, we apply capsule networks to fake news detection. We propose two architectures for different lengths of news statements. We apply two strategies to improve the performance of the capsule networks for the task. First, for detecting medium or long news texts, we use four parallel capsule networks, each of which extracts different n-gram features (2, 3, 4, 5) from the input texts. Second, we use a non-static embedding such that the word embedding model is incrementally up-trained and updated in the training phase. Moreover, as a fake news detector for short news statements, we use only two parallel networks with filter sizes 3 and 5 as feature extractors and a static model for word embedding. For evaluation, two datasets are used: the ISOT dataset with medium or long news texts and LIAR with short statement texts. The experimental results on these two well-known datasets showed improvements in accuracy of 7.8% on the ISOT dataset and of 3.1% on the validation set and 1% on the test set of the LIAR dataset. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated work\nCapsule networks for fake news detection\nDifferent variations of word embedding models\nProposed model\nEvaluation\nDataset\nThe ISOT fake news dataset\nThe LIAR dataset\nExperimental setup\nEvaluation metrics\nResults\nClassification for ISOT dataset\nDiscussion\nClassification for the LIAR dataset\nConclusion" ], "type": "outline" }
2004.03788
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Satirical News Detection with Semantic Feature Extraction and Game-theoretic Rough Sets <<<Abstract>>> Satirical news detection is an important yet challenging task to prevent spread of misinformation. Many feature based and end-to-end neural nets based satirical news detection systems have been proposed and delivered promising results. Existing approaches explore comprehensive word features from satirical news articles, but lack semantic metrics using word vectors for tweet form satirical news. Moreover, the vagueness of satire and news parody determines that a news tweet can hardly be classified with a binary decision, that is, satirical or legitimate. To address these issues, we collect satirical and legitimate news tweets, and propose a semantic feature based approach. Features are extracted by exploring inconsistencies in phrases, entities, and between main and relative clauses. We apply game-theoretic rough set model to detect satirical news, in which probabilistic thresholds are derived by game equilibrium and repetition learning mechanism. Experimental results on the collected dataset show the robustness and improvement of the proposed approach compared with Pawlak rough set model and SVM. <<</Abstract>>> <<<Introduction>>> Satirical news, which uses parody characterized in a conventional news style, has now become an entertainment on social media. While news satire is claimed to be pure comedic and of amusement, it makes statements on real events often with the aim of attaining social criticism and influencing change BIBREF0. Satirical news can also be misleading to readers, even though it is not designed for falsifications. Given such sophistication, satirical news detection is a necessary yet challenging natural language processing (NLP) task. Many feature based fake or satirical news detection systems BIBREF1, BIBREF2, BIBREF3 extract features from word relations given by statistics or lexical database, and other linguistic features. In addition, with the great success of deep learning in NLP in recent years, many end-to-end neural nets based detection systems BIBREF4, BIBREF5, BIBREF6 have been proposed and delivered promising results on satirical news article detection. However, with the evolution of fast-paced social media, satirical news has been condensed into a satirical-news-in-one-sentence form. For example, one single tweet of “If earth continues to warm at current rate moon will be mostly underwater by 2400" by The Onion is largely consumed and spread by social media users than the corresponding full article posted on The Onion website. Existing detection systems trained on full document data might not be applicable to such form of satirical news. Therefore, we collect news tweets from satirical news sources such as The Onion, The New Yorker (Borowitz Report) and legitimate news sources such as Wall Street Journal and CNN Breaking News. We explore the syntactic tree of the sentence and extract inconsistencies between attributes and head noun in noun phrases. We also detect the existence of named entities and relations between named entities and noun phrases as well as contradictions between the main clause and corresponding prepositional phrase. 
For satirical news, such inconsistencies often exist since satirical news usually combines irrelevant components so as to attain surprise and humor. The discrepancies are measured by the cosine similarity between word components, where words are represented by Glove BIBREF7. Sentence structures are derived by Flair, a state-of-the-art NLP framework, which better captures part-of-speech and named entity structures BIBREF8. Due to the obscurity of the satire genre and the lack of information given tweet-form satirical news, there exists ambiguity in satirical news, which makes it very difficult to make a traditional binary decision. That is, it is difficult to classify a news item as satirical or legitimate with the available information. Three-way decisions, proposed by YY Yao, add a deferral option to the traditional yes-and-no binary decisions and can be used to classify satirical news BIBREF9, BIBREF10. That is, a news item may be classified as satirical, legitimate, or deferred. We apply the rough sets model, particularly game-theoretic rough sets, to classify news into three groups, i.e., satirical, legitimate, and deferral. The game-theoretic rough set (GTRS) model, proposed by JT Yao and Herbert, is a recent promising model for decision making in the rough set context BIBREF11. GTRS determine three decision regions from a tradeoff perspective when multiple criteria are involved to evaluate the classification models BIBREF12. Games are formulated to obtain a tradeoff between the involved criteria. The balanced thresholds of the three decision regions can be induced from the game equilibria. GTRS have been applied in recommendation systems BIBREF13, medical decision making BIBREF14, uncertainty analysis BIBREF15, and spam filtering BIBREF16. We apply the GTRS model to our preprocessed dataset and divide all news into satirical, legitimate, or deferral regions. The probabilistic thresholds that determine the three decision regions are obtained by formulating competitive games between accuracy and coverage and then finding the Nash equilibrium of the games. We perform extensive experiments on the collected dataset, fine-tuning the model with different discretization methods and variations of equivalence classes. The experimental results show that the performance of the proposed model is superior compared with the Pawlak rough set model and SVM. <<</Introduction>>> <<<Related Work>>> Satirical news detection is an important yet challenging NLP task. Many feature-based models have been proposed. Burfoot et al. extracted headline, profanity, and slang features using word relations given by statistical metrics and a lexical database BIBREF1. Rubin et al. proposed an SVM-based model with five features (absurdity, humor, grammar, negative affect, and punctuation) for fake news document detection BIBREF2. Yang et al. presented linguistic features such as psycholinguistic features based on dictionaries and writing stylistic features from the part-of-speech tag distribution frequency BIBREF17. Shu et al. gave a survey in which a set of feature extraction methods is introduced for fake news on social media BIBREF3. Conroy et al. also use social network behavior to detect fake news BIBREF18. For satirical sentence classification, Davidov et al. extract patterns using word frequency and punctuation features for tweet sentences and Amazon comments BIBREF19.
The detection of a certain type of sarcasm, which contrasts positive sentiment with a negative situation, by analyzing the sentence pattern with bootstrapped learning was also discussed BIBREF20. Although word-level statistical features are widely used, with advanced word representations and state-of-the-art part-of-speech tagging and named entity recognition models, we observe that semantic features are more important than word-level statistical features to model performance. Thus, we decompose the syntactic tree and use word vectors to more precisely capture the semantic inconsistencies in different structural parts of a satirical news tweet. Recently, with the success of deep learning in NLP, many researchers have attempted to detect fake news with end-to-end neural-net-based approaches. Ruchansky et al. proposed a hybrid deep neural model which processes both text and user information BIBREF5, while Wang et al. proposed a neural network model that takes both text and image data BIBREF6 for detection. Sarkar et al. presented a neural network with attention to capture both sentence-level and document-level satire BIBREF4. Some research analyzed sarcasm in non-news text. Ghosh and Veale BIBREF21 used both the linguistic context and the psychological context information with a bi-directional LSTM to detect sarcasm in users' tweets. They also published a feedback-based dataset by collecting the responses from the tweet authors for future analysis. While all these works detect fake news given full text or image content, or target non-news tweets, we attempt to bridge the gap and detect satirical news by analyzing news tweets which concisely summarize the content of news. <<</Related Work>>> <<<Methodology>>> In this section, we describe the composition and preprocessing of our dataset and introduce our model in detail. We create our dataset by collecting legitimate and satirical news tweets from different news source accounts. Our model aims to detect whether the content of a news tweet is satirical or legitimate. We first extract the semantic features based on inconsistencies in different structural parts of the tweet sentences, and then use these features to train a game-theoretic rough set decision model. <<<Dataset>>> We collected approximately 9,000 news tweets from satirical news sources such as The Onion and the Borowitz Report and about 11,000 news tweets from legitimate news sources such as the Wall Street Journal and CNN Breaking News over the past three years. Each tweet is a concise summary of a news article. Duplicated and extremely short tweets are removed. A news tweet is labeled as satirical if it is written by a satirical news source and legitimate if it is from a legitimate news source. Table TABREF2 gives an example of the tweet instances that comprise our dataset. <<</Dataset>>> <<<Semantic Feature Extraction>>> Satirical news is not based on facts and does not aim to state them. Rather, it uses parody or humor to make statements, criticisms, or just amusement. In order to achieve such an effect, contradictions are greatly utilized. Therefore, inconsistencies are prevalent in different parts of a satirical news tweet. In addition, there is a lack of entities, or inconsistency between entities, in news satire. We extracted these features at the semantic level from different sub-structures of the news tweet. Different structural parts of the sentence are derived by part-of-speech tagging and named entity recognition with Flair.
The inconsistencies in different structures are measured by the cosine similarity of word phrases, where words are represented by Glove word vectors. We explored three different aspects of inconsistency and designed metrics for their measurement. A word-level feature using tf-idf BIBREF22 is added for robustness. <<<Inconsistency in Noun Phrase Structures>>> One way for news satire to obtain a surprise or humor effect is to combine irrelevant or rarely co-occurring attributes with the head noun that they modify. For example, noun phrases such as “rampant accountability", “posthumous apology", “Vatican basement", “self-imposed mental construct" and other rare combinations are widely used in satirical news, while the individual words themselves are common. To measure such inconsistency, we first select all leaf noun phrases (NP) extracted from the semantic trees to avoid repeated calculation. Then, for each noun phrase, each adjacent word pair is selected and represented by 100-dim Glove word vectors, denoted as $(v_{t},w_{t})$. We define the averaged cosine similarity of noun phrase word pairs as $S_{N\!P} = \frac{1}{T}\sum _{t=1}^{T}\cos (v_{t}, w_{t})$, where $T$ is the total number of word pairs. We use $S_{N\!P}$ as a feature to capture the overall inconsistency in noun phrase uses. $S_{N\!P}$ ranges from -1 to 1, where a smaller value indicates more significant inconsistency. <<</Inconsistency in Noun Phrase Structures>>> <<<Inconsistency Between Clauses>>> Another commonly used rhetorical approach for news satire is to create a contradiction between the main clause and its prepositional phrase or relative clause. For instance, in the tweet “Trump boys counter Chinese currency manipulation $by$ adding extra zeros To $20 Bills.", contradiction or surprise is gained by contrasting irrelevant statements provided by different parts of the sentence. Let $q$ and $p$ denote two clauses separated by a main/relative relation or a preposition, and $(w_{1},w_{2},\dots ,w_{q})$ and $(v_{1},v_{2},\dots ,v_{p})$ be the vectorized words in $q$ and $p$. Then we define the inconsistency between $q$ and $p$ as $S_{Q\!P} = \cos \big (\sum _{i=1}^{q} w_{i}, \sum _{j=1}^{p} v_{j}\big )$. That is, the feature $S_{Q\!P}$ is measured by the cosine similarity of linear summations of word vectors, where a smaller value indicates more significant inconsistency. <<</Inconsistency Between Clauses>>> <<<Inconsistency Between Named Entities and Noun Phrases>>> Even though many satirical news tweets are based on real persons or events, most of them lack specific entities. Rather, because the news is fabricated, news writers use words such as “man", “woman", “local man", “area woman", “local family" as the subject. However, the inconsistency between named entities and noun phrases often exists in news satire if a named entity is included. For example, the named entity “Andrew Yang" and the noun phrase “time vortex" show greater inconsistency than “President Trump", "Senate Republicans", and “White House" do in the legitimate news “President Trump invites Senate Republicans to the White House to talk about the funding bill." We define such inconsistency as a categorical feature: $S_{N\! E\! R\! N}$ is the cosine similarity of named entities and noun phrases of a certain sentence and $\bar{S}_{N\! E\! R\! N}$ is the mean value of $S_{N\! E\! R\! N}$ in the corpus. <<</Inconsistency Between Named Entities and Noun Phrases>>> <<<Word Level Feature Using TF-IDF>>> We calculated the difference of tf-idf scores between the legitimate news corpus and the satirical news corpus for each word.
Then, the set $S_{voc}$ that includes most representative legitimate news words is created by selecting top 100 words given the tf-idf difference. For a news tweet and any word $w$ in the tweet, we define the binary feature $B_{voc}$ as: <<</Word Level Feature Using TF-IDF>>> <<</Semantic Feature Extraction>>> <<<GTRS Decision Model>>> We construct a Game-theoretic Rough Sets model for classification given the extracted features. Suppose $E\subseteq U \times U$ is an equivalence relation on a finite nonempty universe of objects $U$, where $E$ is reflexive, symmetric, and transitive. The equivalence class containing an object $x$ is given by $[x]=\lbrace y\in U|xEy\rbrace $. The objects in one equivalence class all have the same attribute values. In the satirical news context, given an undefined concept $satire$, probabilistic rough sets divide all news into three pairwise disjoint groups i.e., the satirical group $POS(satire)$, legitimate group $NEG(satire)$, and deferral group $BND(satire)$, by using the conditional probability $Pr(satire|[x]) = \frac{|satire\cap [x]|}{|[x]|}$ as the evaluation function, and $(\alpha ,\beta )$ as the acceptance and rejection thresholds BIBREF23, BIBREF9, BIBREF10, that is, Given an equivalence class $[x]$, if the conditional probability $Pr(satire|[x])$ is greater than or equal to the specified acceptance threshold $\alpha $, i.e., $Pr(satire|[x])\ge \alpha $, we accept the news in $[x]$ as $satirical$. If $Pr(satire|[x])$ is less than or equal to the specified rejection threshold $\beta $, i.e., $Pr(satire|[x])\le \beta $ we reject the news in $[x]$ as $satirical$, or we accept the news in $[x]$ as $legitimate$. If $Pr(satire|[x])$ is between $\alpha $ and $\beta $, i.e., $\beta <Pr(satire|[x])<\alpha $, we defer to make decisions on the news in $[x]$. Pawlak rough sets can be viewed as a special case of probabilistic rough sets with $(\alpha ,\beta )=(1,0)$. Given a pair of probabilistic thresholds $(\alpha , \beta )$, we can obtain a news classifier according to Equation (DISPLAY_FORM13). The three regions are a partition of the universe $U$, Then, the accuracy and coverage rate to evaluate the performance of the derived classifier are defined as follows BIBREF12, The criterion coverage indicates the proportions of news that can be confidently classified. Next, we will obtain $(\alpha , \beta )$ by game formulation and repetition learning. <<<Game Formulation>>> We construct a game $G=\lbrace O,S,u\rbrace $ given the set of game players $O$, the set of strategy profile $S$, and the payoff functions $u$, where the accuracy and coverage are two players, respectively, i.e., $O=\lbrace acc, cov\rbrace $. The set of strategy profiles $S=S_{acc}\times S_{cov}$, where $S_{acc}$ and $S_{cov} $ are sets of possible strategies or actions performed by players $acc$ and $cov$. The initial thresholds are set as $(1,0)$. All these strategies are the changes made on the initial thresholds, $c_{acc}$ and $c_{cov}$ denote the change steps used by two players, and their values are determined by the concrete experiment date set. Payoff functions. The payoffs of players are $u=(u_{acc},u_{cov})$, and $u_{acc}$ and $u_{cov}$ denote the payoff functions of players $acc$ and $cov$, respectively. Given a strategy profile $p=(s, t)$ with player $acc$ performing $s$ and player $cov$ performing $t$, the payoffs of $acc$ and $cov$ are $u_{acc}(s, t)$ and $u_{cov}(s, t)$. We use $u_{acc}(\alpha ,\beta )$ and $u_{cov}(\alpha ,\beta )$ to show this relationship. 
The payoff functions are defined as $u_{acc}(\alpha ,\beta )=Acc_{(\alpha , \beta )}(Satire)$ and $u_{cov}(\alpha ,\beta )=Cov_{(\alpha , \beta )}(Satire)$, where $Acc_{(\alpha , \beta )}(Satire)$ and $Cov_{(\alpha , \beta )}(Satire)$ are the accuracy and coverage defined in Equations (DISPLAY_FORM15) and (DISPLAY_FORM16). Payoff table. We use payoff tables to represent the formulated game. Table TABREF20 shows a payoff table example in which both players have 3 strategies as defined above. The arrow $\downarrow $ denotes decreasing a value and $\uparrow $ denotes increasing a value. In each cell, the threshold values are determined by the two players. <<</Game Formulation>>> <<<Repetition Learning Mechanism>>> We repeat the game with the new thresholds until a balanced solution is reached. We first analyze the pure strategy equilibrium of the game and then check if the stopping criteria are satisfied. Game equilibrium. The game solution of pure strategy Nash equilibrium is used to determine possible game outcomes in GTRS. The strategy profile $(s_{i},t_{j})$ is a pure strategy Nash equilibrium if no player can improve their payoff by unilaterally changing their strategy. This means that none of the players would like to change their strategy, or they would lose benefit by deviating from this strategy profile, provided the player has knowledge of the other player's strategy. Repetition of games. Assume that we formulate a game in which the initial thresholds are $(\alpha , \beta )$, and the equilibrium analysis shows that the thresholds corresponding to the equilibrium are $(\alpha ^{*}, \beta ^{*})$. If the thresholds $(\alpha ^{*}, \beta ^{*})$ do not satisfy the stopping criterion, we will update the initial thresholds in the subsequent games. The initial thresholds of the new game will be set as $(\alpha ^{*}, \beta ^{*})$. If the thresholds $(\alpha ^{*}, \beta ^{*})$ satisfy the stopping criterion, we may stop the repetition of games. Stopping criterion. We define the stopping criteria so that the iterations of games can stop at a proper time. In this research, we set the stopping criterion as follows: within the valid range of thresholds, the increase of one player's payoff is less than the decrease of the other player's payoff. <<</Repetition Learning Mechanism>>> <<</GTRS Decision Model>>> <<</Methodology>>> <<<Experiments>>> There are 8757 news records in our preprocessed data set. We use Jenks natural breaks BIBREF24 to discretize the continuous variables $S_{N\!P}$ and $S_{Q\!P}$, each into five categories denoted by nominal values from 0 to 4, where larger values still fall into bins with larger nominal values. Let $D_{N\!P}$ and $D_{Q\!P}$ denote the discretized variables $S_{N\!P}$ and $S_{Q\!P}$, respectively. We derived the information table that only contains discrete features from our original dataset. A fraction of the information table is shown in Table TABREF23. The news whose condition attributes have the same values are classified into an equivalence class $X_i$. We derived 149 equivalence classes and calculated the corresponding probability $Pr(X_i)$ and conditional probability $Pr(Satire|X_i)$ for each $X_i$. The probability $Pr(X_{i})$ denotes the ratio of the number of news contained in the equivalence class $X_i$ to the total number of news in the dataset, while the conditional probability $Pr(Satire|X_{i})$ is the proportion of news in $X_i$ that are satirical. We combine the equivalence classes with the same conditional probability and reduce the number of equivalence classes to 108. Table TABREF24 shows a part of the probabilistic data information about the concept satire.
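As a concrete illustration of how the probabilistic thresholds partition the equivalence classes, the sketch below assigns each class to the satirical (POS), legitimate (NEG), or deferral (BND) region given its conditional probability. It is an illustrative sketch with made-up class statistics, not the authors' code, and the game-based search for $(\alpha , \beta )$ is not shown.

```python
def three_way_regions(classes, alpha, beta):
    """Partition equivalence classes by Pr(satire | [x]) using thresholds (alpha, beta)."""
    pos, neg, bnd = [], [], []              # satirical / legitimate / deferral regions
    for name, pr_satire in classes.items():
        if pr_satire >= alpha:
            pos.append(name)                # accept as satirical
        elif pr_satire <= beta:
            neg.append(name)                # accept as legitimate
        else:
            bnd.append(name)                # defer the decision
    return pos, neg, bnd

# Hypothetical equivalence classes and their conditional probabilities Pr(Satire | X_i).
toy_classes = {"X1": 0.91, "X2": 0.55, "X3": 0.50, "X4": 0.12}

print(three_way_regions(toy_classes, alpha=1.0, beta=0.0))     # Pawlak rough sets: most classes deferred
print(three_way_regions(toy_classes, alpha=0.52, beta=0.48))   # looser thresholds leave far fewer deferrals
```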
<<<Finding Thresholds with GTRS>>> We formulated a competitive game between the criteria accuracy and coverage to obtain the balanced probabilistic thresholds with the initial thresholds $(\alpha , \beta )=(1,0)$ and learning rate 0.03. As shown in the payoff table (Table TABREF26), the cell at the bottom right corner is the game equilibrium, whose strategy profile is ($\beta $ increases 0.06, $\alpha $ decreases 0.06). The payoffs of the players are (0.9784, 0.3343). The stopping criterion is that, within the valid range of thresholds, the increase of one player's payoff is less than the decrease of the other player's payoff. When the thresholds change from (1,0) to (0.94, 0.06), the accuracy is decreased from 1 to 0.9784 but the coverage is increased from 0.0795 to 0.3343. We repeat the game by setting $(0.94, 0.06)$ as the next initial thresholds. The competitive games are repeated seven times. The result is shown in Table TABREF27. After the eighth iteration, the repetition of games is stopped because further changes to the thresholds may cause them to lie outside the range $0 < \beta < \alpha <1$, and the final result is the equilibrium of the seventh game, $(\alpha , \beta )=(0.52, 0.48)$. <<</Finding Thresholds with GTRS>>> <<<Results>>> We compare Pawlak rough sets, SVM, and our GTRS approach on the proposed dataset. Table TABREF29 shows the results on the experimental data. The SVM classifier achieved an accuracy of $78\%$ with a $100\%$ coverage. The Pawlak rough set model using $(\alpha , \beta )=(1,0)$ achieves a $100\%$ accuracy and a coverage ratio of $7.95\%$, which means it can only classify $7.95\%$ of the data. The classifier constructed by GTRS with $(\alpha , \beta )=(0.52, 0.48)$ reached an accuracy of $82.71\%$ and a coverage of $97.49\%$, which indicates that $97.49\%$ of the data are able to be classified with an accuracy of $82.71\%$. The remaining $2.51\%$ of the data cannot be classified without providing more information. To make our method comparable to other baselines such as SVM, we assume random guessing is made on the deferral region and present the modified accuracy. The modified accuracy for our approach is then $0.8271\times 0.9749 + 0.5 \times 0.0251 = 81.89\%$. Our method shows significant improvement compared to the Pawlak model and SVM. <<</Results>>> <<<Conclusion>>> In this paper, we propose a satirical news detection approach based on extracted semantic features and game-theoretic rough sets. In our model, the semantic feature extraction captures the inconsistencies in the different structural parts of the sentences, and the GTRS classifier can process the incomplete information based on repetitive learning and the acceptance and rejection thresholds. The experimental results on our created satirical and legitimate news tweets dataset show that our model significantly outperforms the Pawlak rough set model and SVM. In particular, we demonstrate our model's ability to interpret satirical news detection from a semantic and information trade-off perspective. Other interesting extensions of our paper may be to use rough set models to extract the linguistic features at the document level. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nMethodology\nDataset\nSemantic Feature Extraction\nInconsistency in Noun Phrase Structures\nInconsistency Between Clauses\nInconsistency Between Named Entities and Noun Phrases\nWord Level Feature Using TF-IDF\nGTRS Decision Model\nGame Formulation\nRepetition Learning Mechanism\nExperiments\nFinding Thresholds with GTRS\nResults\nConclusion" ], "type": "outline" }
1910.10869
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Combining Acoustics, Content and Interaction Features to Find Hot Spots in Meetings <<<Abstract>>> Involvement hot spots have been proposed as a useful concept for meeting analysis and studied off and on for over 15 years. These are regions of meetings that are marked by high participant involvement, as judged by human annotators. However, prior work was either not conducted in a formal machine learning setting, or focused on only a subset of possible meeting features or downstream applications (such as summarization). In this paper we investigate to what extent various acoustic, linguistic and pragmatic aspects of the meetings can help detect hot spots, both in isolation and jointly. In this context, the openSMILE toolkit \cite{opensmile} is used to extract features based on acoustic-prosodic cues, BERT word embeddings \cite{BERT} are used for modeling the lexical content, and a variety of statistics based on the speech activity are used to describe the verbal interaction among participants. In experiments on the annotated ICSI meeting corpus, we find that the lexical modeling part is the most informative, with incremental contributions from interaction and acoustic-prosodic model components. <<</Abstract>>> <<<Introduction and Prior Work>>> A definition of the meeting “hot spots” was first introduced in BIBREF2, where it was investigated whether human annotators could reliably identify regions in which participants are “highly involved in the discussion”. The motivation was that meetings generally have low information density and are tedious to review verbatim after the fact. An automatic system that could detect regions of high interest (as indicated by the involvement of the participants during the meeting) would thus be useful. Relatedly, automatic meeting summarization could also benefit from such information to give extra weight to hot spot regions in selecting or abstracting material for inclusion in the summary. Later work on the relationship between involvement and summarization BIBREF3 defined a different approach: hot spots are those regions chosen for inclusion in a summary by human annotators (“summarization hot spots”). In the present work we stick with the original “involvement hot spot” notion, and refer to such regions simply as “hot spots”, regardless of their possible role in summarization. We note that high involvement may be triggered both by a meeting's content (“what is being talked about”, and “what may be included in a textual summary”), as well as behavioral and social factors, such as a desire to participate, to stake out a position, or to oppose another participant. A related notion in dialog system research is “level of interest” BIBREF4. The initial research on hot spots focused on the reliability of human annotators and correlations with certain low-level acoustic features, such as pitch BIBREF2. Also investigated were the correlations between hot spots and dialog acts BIBREF5 and hot spots and speaker overlap BIBREF6, without however conducting experiments in automatic hot spot prediction using machine learning techniques.
Laskowski BIBREF7 redefined the hot spot annotations in terms of time-based windows over meetings, and investigated various classifier models to detect “hotness” (i.e., elevated involvement). However, that work focused on only two types of speech features: presence of laughter and the temporal patterns of speech activity across the various participants, both of which were found to be predictive of involvement. For the related problem of level-of-interest prediction in dialog systems BIBREF8, it was found that content-based classification can also be effective, using both a discriminative TF-IDF model and lexical affect scores, as well as prosodic features. In line with the earlier hot spot research on interaction patterns and speaker overlap, turn-taking features were shown in BIBREF3 to be helpful for spotting summarization hot spots, even more so than the human involvement annotations. The latter result confirms our intuition that summarization-worthiness and involvement are different notions of “hotness”. In this paper, following Laskowski, we focus on the automatic prediction of the speakers' involvement in sliding-time windows/segments. We evaluate machine learning models based on a range of features that can be extracted automatically from audio recordings, either directly via signal processing or via the use of automatic transcriptions (ASR outputs). In particular, we investigate the relative contributions of three classes of information: low-level acoustic-prosodic features, such as those commonly used in other paralinguistic tasks, such as sentiment analysis (extracted using openSMILE BIBREF0); spoken word content, as encoded with a state-of-the-art lexical embedding approach such as BERT BIBREF1; speaker interaction, based on speech activity over time and across different speakers. We attach lower importance to laughter, even though it was found to be highly predictive of involvement in the ICSI corpus, partly because we believe it would not transfer well to more general types of (e.g., business) meetings, and partly because laughter detection is still a hard problem in itself BIBREF9. Generation of speaker-attributed meeting transcriptions, on the other hand, has seen remarkable progress BIBREF10 and could support the features we focus on here. <<</Introduction and Prior Work>>> <<<Data>>> The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset is comprised of 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Heightened involvement is rare, being marked on only 1% of utterances. Due to the severe imbalance in the label distribution, Laskowski BIBREF13 proposed extending the involvement, or hotness, labels to sliding time windows. In our implementation (details below), this resulted in 21.7% of samples (windows) being labeled as “involved”. We split the corpus into three subsets: training, development, and evaluation, keeping meetings intact. Table TABREF4 gives statistics of these partitions.
We were concerned with the relatively small number of meetings in the test sets, and repeated several of our experiments with a (jackknifing) cross-validation setup over the training set. The results obtained were very similar to those with the fixed train/test split that we report here. <<<Time Windowing>>> As stated above, the corpus was originally labeled for hot spots at the utterance level, where involvement was marked by either a `b' or a `b+' label. Training and test samples for our experiments correspond to 60 s-long sliding windows, with a 15 s step size. If a certain window, e.g., a segment spanning the times 15 s ...75 s, overlaps with any involved speech utterance, then we label that whole window as `hot'. Fig. FIGREF6 gives a visual representation. <<</Time Windowing>>> <<<Metric>>> In spite of the windowing approach, the class distribution is still skewed, and an accuracy metric would reflect the particular class distribution in our data set. Therefore, we adopt the unweighted average recall (UAR) metric commonly used in emotion classification research. UAR is a reweighted accuracy where the samples of both classes are weighted equally in aggregate. UAR thus simulates a uniform class distribution. To match the objective, our classifiers are trained on appropriately weighted training data. Note that chance performance for UAR is by definition 50%, making results more comparable across different data sets. <<</Metric>>> <<</Data>>> <<<Feature Description>>> <<<Acoustic-Prosodic Features>>> Prosody encompasses pitch, energy, and durational features of speech. Prosody is thought to convey emphasis, sentiment, and emotion, all of which are presumably correlated with expressions of involvement. We used the openSMILE toolkit BIBREF0 to compute 988 features as defined by the emobase988 configuration file, operating on the close-talking meeting recordings. This feature set consists of low-level descriptors such as intensity, loudness, Mel-frequency cepstral coefficients, and pitch. For each low-level descriptor, functionals such as max/min value, mean, standard deviation, kurtosis, and skewness are computed. Finally, global mean and variance normalization are applied to each feature, using training set statistics. The feature vector thus captures acoustic-prosodic features aggregated over what are typically utterances. We tried extracting openSMILE features directly from 60 s windows, but found better results by extracting subwindows of 5 s, followed by pooling over the longer 60 s duration. We attribute this to the fact that emobase features are designed to operate on individual utterances, which have durations closer to 5 s than 60 s. <<</Acoustic-Prosodic Features>>> <<<Word-Based Features>>> <<<Bag of words with TF-IDF>>> Initially, we investigated a simple bag-of-words model including all unigrams, bigrams, and trigrams found in the training set. Occurrences of the top 10,000 n-grams were encoded to form a 10,000-dimensional vector, with values weighted according to TF-IDF. TF-IDF weights n-grams according to both their frequency (TF) and their salience (inverse document frequency, IDF) in the data, where each utterance was treated as a separate document. The resulting feature vectors are very sparse. <<</Bag of words with TF-IDF>>> <<<Embeddings>>> The ICSI dataset is too small to train a neural embedding model from scratch.
Therefore, it is convenient to use the pre-trained BERT embedding architecture BIBREF1 to create an utterance-level embedding vector for each region of interest. Having been trained on a large text corpus, the resulting embeddings encode semantic similarities among utterances, and would enable generalization from word patterns seen in the ICSI training data to those that have not been observed in that limited corpus. We had previously also created an adapted version of the BERT model, tuned to perform utterance-level sentiment classification, on a separate dataset BIBREF14. As proposed in BIBREF1, we fine-tuned all layers of the pre-trained BERT model by adding a single fully-connected layer and classifying using only the embedding corresponding to the classification ([CLS]) token prepended to each utterance. The difference in UAR between the hot spot classifiers using the pre-trained embeddings and those using the sentiment-adapted embeddings is small. Since the classifier using embeddings extracted by the sentiment-adapted model yielded slightly better performance, we report all results using these as input. To obtain a single embedding for each 60 s window, we experimented with various approaches to pooling the token- and utterance-level embeddings. For our first approach, we ignored the ground-truth utterance segmentation and speaker information. We merged all words spoken within a particular window into a single contiguous span. Following BIBREF1, we added the appropriate classification and separation tokens to the text and selected the embedding corresponding to the [CLS] token as the window-level embedding. Our second approach used the ground-truth segmentation of the dialogue. Each speaker turn was independently modeled, and utterance-level embeddings were extracted using the representation corresponding to the [CLS] token. Utterances that cross window boundaries are truncated using the word timestamps, so only words spoken within the given time window are considered. For all reported experiments, we use L2-norm pooling to form the window-level embeddings for the final classifier, as this performed better than either mean or max pooling. <<</Embeddings>>> <<<Speaker Activity Features>>> These features were a compilation of three different feature types: Speaker overlap percentages: Based on the available word-level times, we computed a 6-dimensional feature vector, where the $i$th index indicates the fraction of time that $i$ or more speakers are talking within a given window. This can be expressed by $\frac{t_i}{60}$ with $t_i$ indicating the time in seconds that $i$ or more people were speaking at the same time. Unique speaker count: Counts the unique speakers within a window, as a useful metric to track the diversity of participation within a certain window. Turn switch count: Counts the number of times a speaker begins talking within a window. This is a similar metric to the number of utterances. However, unlike utterance count, turn switches can be computed entirely from speech activity, without requiring a linguistic segmentation. <<</Speaker Activity Features>>> <<<Laughter Count>>> Laskowski found that laughter is highly predictive of involvement in the ICSI data. Laughter is annotated on an utterance level and falls into two categories: laughter solely on its own (no words) or laughter contained within an utterance (i.e. during speech). The feature is a simple tally of the number of times people laughed within a window.
We include it in some of our experiments for comparison purposes, though we do not trust it as general feature. (The participants in the ICSI meetings are far too familiar and at ease with each other to be representative with regards to laughter.) <<</Laughter Count>>> <<</Feature Description>>> <<<Modeling>>> <<<Non-Neural Models>>> In preliminary experiments, we compared several non-neural classifiers, including logistic regression (LR), random forests, linear support vector machines, and multinomial naive Bayes. Logistic regression gave the best results all around, and we used it exclusively for the results shown here, unless neural networks are used instead. <<</Non-Neural Models>>> <<<Feed-Forward Neural Networks>>> <<<Pooling Techniques>>> For BERT and openSMILE vector classification, we designed two different feed-forward neural network architectures. The sentiment-adapted embeddings described in Section SECREF3 produce one 1024-dimensional vector per utterance. Since all classification operates on time windows, we had to pool over all utterances falling withing a given window, taking care to truncate words falling outside the window. We tested four pooling methods: L2-norm, mean, max, and min, with L2-norm giving the best results. As for the prosodic model, each vector extracted from openSMILE represents a 5 s interval. Since there was both a channel/speaker-axis and a time-axis, we needed to pool over both dimensions in order to have a single vector representing the prosodic features of a 60 s window. The second to last layer is the pooling layer, max-pooling across all the channels, and then mean-pooling over time. The output of the pooling layer is directly fed into the classifier. <<</Pooling Techniques>>> <<<Hyperparameters>>> The hyperparameters of the neural networks (hidden layer number and sizes) were also tuned in preliminary experiments. Details are given in Section SECREF5. <<</Hyperparameters>>> <<</Feed-Forward Neural Networks>>> <<<Model Fusion>>> Fig. FIGREF19 depicts the way features from multiple categories are combined. Speech activity and word features are fed directly into a final LR step. Acoustic-prosodic features are first combined in a feed-forward neural classifier, whose output log posteriors are in turn fed into the LR step for fusion. (When using only prosodic features, the ANN outputs are used directly.) <<</Model Fusion>>> <<</Modeling>>> <<<Experiments>>> We group experiments by the type of feaures they are based on: acoustic-prosodic, word-based, and speech activity, evaluating each group first by itself, and then in combination with others. <<<Speech Feature Results>>> As discussed in Section SECREF3, a multitude of input features were investigated, with some being more discriminative. The most useful speech activity features were speaker overlap percentage, number of unique speakers, and number of turn switches, giving evaluation set UARs of 63.5%, 63.9%, and 66.6%, respectively. When combined the UAR improved to 68.0%, showing that these features are partly complementary. <<</Speech Feature Results>>> <<<Word-Based Results>>> The TF-IDF model alone gave a UAR of 59.8%. A drastic increase in performance to 70.5% was found when using the BERT embeddings instead. Therefore we adopted embeddings for all further experiments based on word information. Three different types of embeddings were investigated, i.e. sentiment-adapted embeddings at an utterance-level, unadapted embeddings at the utterance-level, and unadapted embeddings over time windows. 
The adapted embeddings (on utterances) performed best, indicating that adaptation to the sentiment task is useful for involvement classification. It is important to note, however, that the set of utterance-level embeddings is larger than the set of window-level embeddings. This is due to there being more utterances than windows in the meeting corpus. The best neural architecture we found for these embeddings is a 5-layer neural network with sizes 1024-64-32-12-2. Other hyperparameters for this model are dropout rate = 0.4, learning rate = $10^{-7}$ and activation function “tanh”. The UAR on the evaluation set with just BERT embeddings as input is 65.2%. Interestingly, the neural model was outperformed by an LR directly on the embedding vectors. Perhaps the neural network requires further fine-tuning, or the neural model is too prone to overfitting, given the small training corpus. In any case, we use LR on embeddings for all subsequent results. <<</Word-Based Results>>> <<<Acoustic-Prosodic Feature Results>>> Our prosodic model is a 5-layer ANN, as described in Section SECREF15. The architecture is: 988-512-128-16-Pool-2. The hyperparameters are: dropout rate = 0.4, learning rate = $10^{-7}$, activation = “tanh". The UAR on the evaluation set with just openSMILE features is 62.0%. <<</Acoustic-Prosodic Feature Results>>> <<<Fusion Results and Discussion>>> Table TABREF24 gives the UAR for each feature subset individually, for all features combined, and for a combination in which one feature subset in turn is left out. The one-feature-set-at-a-time results suggest that prosody, speech activity and words are of increasing importance in that order. The leave-one-out analysis agrees that the words are the most important (largest drop in accuracy when removed), but on that criterion the prosodic features are more important than speech activity. The combination of all features is 0.4% absolute better than any other subset, showing that all feature subsets are partly complementary. Fig. FIGREF25 shows the same results in histogram form, but also adds those with laughter information. Laughter count by itself is the strongest cue to involvement, as Laskowski BIBREF7 had found. However, even given the strong individual laughter feature, the other features add information, pushing the UAR from 75.1% to 77.5%. <<</Fusion Results and Discussion>>> <<<Conclusion>>> We studied detection of areas of high involvement, or “hot spots”, within meetings using the ICSI corpus. The features that yielded the best results are in line with our intuitions. Word embeddings, speech activity features such as the number of turn changes, and prosodic features are all plausible indicators of high involvement. Furthermore, the feature sets are partly complementary and yield the best results when combined using a simple logistic regression model. The combined model achieves 72.6% UAR, or 77.5% with the laughter feature. For future work, we would want to see a validation on an independent meeting collection, such as business meetings. Some features, in particular laughter, are bound not to be as useful in this case. More data could also enable the training of joint models that perform an early fusion of the different feature types. Also, the present study still relied on human transcripts, and it would be important to know how much UAR suffers with a realistic amount of speech recognition error. Transcription errors are expected to boost the importance of the feature types that do not rely on words. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction and Prior Work\nData\nTime Windowing\nMetric\nFeature Description\nAcoustic-Prosodic Features\nWord-Based Features\nBag of words with TF-IDF\nEmbeddings\nSpeaker Activity Features\nLaughter Count\nModeling\nNon-Neural Models\nFeed-Forward Neural Networks\nPooling Techniques\nHyperparameters\nModel Fusion\nExperiments\nSpeech Feature Results\nWord-Based Results\nAcoustic-Prosodic Feature Results\nFusion Results and Discussion\nConclusion" ], "type": "outline" }
1909.08103
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Simultaneous Speech Recognition and Speaker Diarization for Monaural Dialogue Recordings with Target-Speaker Acoustic Models <<<Abstract>>> This paper investigates the use of target-speaker automatic speech recognition (TS-ASR) for simultaneous speech recognition and speaker diarization of single-channel dialogue recordings. TS-ASR is a technique to automatically extract and recognize only the speech of a target speaker given a short sample utterance of that speaker. One obvious drawback of TS-ASR is that it cannot be used when the speakers in the recordings are unknown because it requires a sample of the target speakers in advance of decoding. To remove this limitation, we propose an iterative method, in which (i) the estimation of speaker embeddings and (ii) TS-ASR based on the estimated speaker embeddings are alternately executed. We evaluated the proposed method by using very challenging dialogue recordings in which the speaker overlap ratio was over 20%. We confirmed that the proposed method significantly reduced both the word error rate (WER) and diarization error rate (DER). Our proposed method combined with i-vector speaker embeddings ultimately achieved a WER that differed by only 2.1 % from that of TS-ASR given oracle speaker embeddings. Furthermore, our method can solve speaker diarization simultaneously as a by-product and achieved better DER than that of the conventional clustering-based speaker diarization method based on i-vector. <<</Abstract>>> <<<Introduction>>> Our main goal is to develop a monaural conversation transcription system that can not only perform automatic speech recognition (ASR) of multiple talkers but also determine who spoke the utterance when, known as speaker diarization BIBREF0, BIBREF1. For both ASR and speaker diarization, the main difficulty comes from speaker overlaps. For example, a speaker-overlap ratio of about 15% was reported in real meeting recordings BIBREF2. For such overlapped speech, neither conventional ASR nor speaker diarization provides a result with sufficient accuracy. It is known that mixing two speech significantly degrades ASR accuracy BIBREF3, BIBREF4, BIBREF5. In addition, no speaker overlaps are assumed with most conventional speaker diarization techniques, such as clustering of speech partitions (e.g. BIBREF0, BIBREF6, BIBREF7, BIBREF8, BIBREF9), which works only if there are no speaker overlaps. Due to these difficulties, it is still very challenging to perform ASR and speaker diarization for monaural recordings of conversation. One solution to the speaker-overlap problem is applying a speech-separation method such as deep clustering BIBREF10 or deep attractor network BIBREF11. However, a major drawback of such a method is that the training criteria for speech separation do not necessarily maximize the accuracy of the final target tasks. For example, if the goal is ASR, it will be better to use training criteria that directly maximize ASR accuracy. In one line of research using ASR-based training criteria, multi-speaker ASR based on permutation invariant training (PIT) has been proposed BIBREF3, BIBREF12, BIBREF13, BIBREF14, BIBREF15. 
With PIT, the label-permutation problem is solved by considering all possible permutations when calculating the loss function BIBREF16. PIT was first proposed for speech separation BIBREF16 and soon extended to ASR loss with promising results BIBREF3, BIBREF12, BIBREF13, BIBREF14, BIBREF15. However, a PIT-ASR model produces transcriptions for each utterance of speakers in an unordered manner, and it is no longer straightforward to solve speaker permutations across utterances. To make things worse, a PIT model trained with ASR-based loss normally does not produce separated speech waveforms, which makes speaker tracing more difficult. In another line of research, target-speaker (TS) ASR, which automatically extracts and transcribes only the target speaker's utterances given a short sample of that speaker's speech, has been proposed BIBREF17, BIBREF4. Žmolíková et al. proposed a target-speaker neural beamformer that extracts a target speaker's utterances given a short sample of that speaker's speech BIBREF17. This model was recently extended to handle ASR-based loss to maximize ASR accuracy with promising results BIBREF4. TS-ASR can naturally solve the speaker-permutation problem across utterances. Importantly, if we can execute TS-ASR for each speaker correctly, speaker diarization is solved at the same time just by extracting the start and end time information of the TS-ASR result. However, one obvious drawback of TS-ASR is that it cannot be applied when the speakers in the recordings are unknown because it requires a sample of the target speakers in advance of decoding. Based on this background, we propose a speech recognition and speaker diarization method that is based on TS-ASR but can be applied without knowing the speaker information in advance. To remove the limitation of TS-ASR, we propose an iterative method, in which (i) the estimation of target-speaker embeddings and (ii) TS-ASR based on the estimated embeddings are alternately executed. As an initial trial, we evaluated the proposed method by using real dialogue recordings in the Corpus of Spontaneous Japanese (CSJ). Although it contains the speech of only two speakers, the speaker-overlap ratio of the dialogue speech is very high: 20.1%. Thus, this is very challenging even for state-of-the-art ASR and speaker diarization. We show that the proposed method effectively reduced both word error rate (WER) and diarization error rate (DER). <<</Introduction>>> <<<Simultaneous ASR and Speaker Diarization>>> In this section, we first explain the problem we targeted and then the proposed method with reference to Figure FIGREF1. <<<Problem statement>>> The overview of the problem is shown in Figure FIGREF1 (left). We assume a sequence of observations $\mathcal {X}=\lbrace {\bf X}_1,...,{\bf X}_U\rbrace $, where $U$ is the number of observations, and ${\bf X}_u$ is the $u$-th observation consisting of a sequence of acoustic features. Such a sequence is naturally generated when we separate a long recording into small segments based on voice activity detection, which is a basic preprocess for ASR, so as not to generate overly large lattices. We also assume a tuple of word hypotheses ${\bf W}_u=(W_{1,u},...,W_{J,u})$ for an observation ${\bf X}_u$, where $J$ is the number of speakers, and $W_{j,u}$ represents the speech-recognition hypothesis of the $j$-th speaker given observation ${\bf X}_u$. We assume $W_{j,u}$ contains not only word sequences but also their corresponding frame-level time alignments of phonemes and silences. 
Finally, we assume a tuple of speaker embeddings $\mathcal {E}=(e_1, ..., e_J)$, where $e_j\in \mathbb {R}^d$ represents the $d$-dim speaker embedding of the $j$-th speaker. Then, our objective is to find the best possible $\mathcal {W}=\lbrace {\bf W}_1,...,{\bf W}_U\rbrace $ given a sequence of observations $\mathcal {X}$ as follows. Here, the starting point is the conventional maximum a posteriori-based decoding given $\mathcal {X}$ but for multiple speakers. We then introduce the speaker embeddings $\mathcal {E}$ as a hidden variable (Eq. ). Finally, we approximate the summation by using a max operation (Eq. ). Our motivation to introduce $\mathcal {E}$, which is constant across all observation indices $u$, is to explicitly enforce the order of speakers in $\mathcal {W}$ to be constant over indices $u$. It should be emphasized that if we can solve the problem, speaker diarization is solved at the same time just by extracting the start and end time information of each hypothesis in $\mathcal {W}$. Also note that there are $J!$ possible solutions by swapping the order of speakers in $\mathcal {E}$, and it is sufficient to find just one such solution. <<</Problem statement>>> <<<Iterative maximization>>> It is not easy to directly solve $P(\mathcal {W},\mathcal {E}|\mathcal {X})$, so we propose to alternately maximize $\mathcal {W}$ and $\mathcal {E}$. Namely, we first fix $\underline{\mathcal {W}}$ and find $\mathcal {E}$ that maximizes $P(\underline{\mathcal {W}},\mathcal {E}|\mathcal {X})$. We then fix $\underline{\mathcal {E}}$ and find $\mathcal {W}$ that maximizes $P(\mathcal {W},\underline{\mathcal {E}}|\mathcal {X})$. By iterating this procedure, $P(\mathcal {W},\mathcal {E}|\mathcal {X})$ can be increased monotonically. Note that it can be said by a simple application of the chain rule that finding $\mathcal {E}$ that maximizes $P(\underline{\mathcal {W}},\mathcal {E}|\mathcal {X})$ with a fixed $\underline{\mathcal {W}}$ is equivalent to finding $\mathcal {E}$ that maximizes $P(\mathcal {E}|\underline{\mathcal {W}},\mathcal {X})$. The same thing can be said for the estimation of $\mathcal {W}$ with a fixed $\underline{\mathcal {E}}$. For the $(i)$-th iteration of the maximization ($i\in \mathbb {Z}^{\ge 0}$), we first find the most plausible estimation of $\mathcal {E}$ given the $(i-1)$-th speech-recognition hypothesis $\tilde{\mathcal {W}}^{(i-1)}$ as follows. Here, the estimation of $\tilde{\mathcal {E}}^{(i)}$ is dependent on $\tilde{\mathcal {W}}^{(i-1)}$ for $i \ge 1$. Assuming that the overlapped speech corresponds to a “third person” who is different from any person in the recording, Eq. DISPLAY_FORM5 can be achieved by estimating the speaker embeddings only from non-overlapped regions (upper part of Figure FIGREF1 (right)). In this study, we used the i-vector BIBREF18 as the representation of speaker embeddings, and estimated the i-vector based only on the non-overlapped region given $\tilde{\mathcal {W}}^{(i-1)}$ for each speaker. Note that, since we do not have an estimation of $\mathcal {W}$ for the first iteration, $\tilde{\mathcal {E}}^{(0)}$ is initialized only by $\mathcal {X}$. In this study, we estimated the i-vector for each speaker given the speech region that was estimated by the clustering-based speaker diarization method. More precisely, we estimated the i-vector for each ${\bf X}_u$ and then applied $J$-cluster K-means clustering. The center of each cluster was used for the initial speaker embeddings $\tilde{\mathcal {E}}^{(0)}$. 
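To make the alternation concrete, here is a minimal Python sketch of the loop: only the K-means initialisation of the speaker embeddings and the alternation of steps (i) and (ii) follow the text; the TS-ASR decoder and the embedding re-estimation are left as placeholders, and all function names are ours rather than the authors'.

```python
import numpy as np
from sklearn.cluster import KMeans


def init_speaker_embeddings(segment_ivectors, num_speakers):
    """E^(0): cluster per-segment i-vectors into J clusters and use the
    cluster centres as the initial speaker embeddings, as described above."""
    kmeans = KMeans(n_clusters=num_speakers, n_init=10, random_state=0)
    kmeans.fit(np.stack(segment_ivectors))
    return [c for c in kmeans.cluster_centers_]


def ts_asr_decode(segment_features, speaker_embedding):
    """Placeholder for the TS-ASR decoder: would return the hypothesis
    W_{j,u} (words plus time alignment) for one segment and one speaker."""
    raise NotImplementedError("plug in a real TS-ASR system here")


def reestimate_embeddings(segment_ivectors, hypotheses, num_speakers):
    """Placeholder for step (i): re-estimate one i-vector per speaker from
    the regions that the current hypotheses mark as non-overlapped."""
    raise NotImplementedError("plug in i-vector re-estimation here")


def alternating_decode(segments, segment_ivectors, num_speakers, num_iters=3):
    """Alternate (i) embedding estimation and (ii) TS-ASR decoding."""
    embeddings = init_speaker_embeddings(segment_ivectors, num_speakers)
    hypotheses = None
    for _ in range(num_iters):
        # Step (ii): decode every segment u independently for every speaker j,
        # conditioning only on that speaker's current embedding.
        hypotheses = [[ts_asr_decode(x, e) for e in embeddings]
                      for x in segments]
        # Step (i): update the speaker embeddings given the new hypotheses.
        embeddings = reestimate_embeddings(segment_ivectors, hypotheses,
                                           num_speakers)
    return hypotheses, embeddings


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_ivectors = [rng.standard_normal(100) for _ in range(20)]
    print(len(init_speaker_embeddings(fake_ivectors, num_speakers=2)))  # 2
```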
We then update $\mathcal {W}$ given speaker embeddings $\tilde{\mathcal {E}}^{(i)}$. Here, we estimate the most plausible hypotheses $\mathcal {W}$ given estimated embeddings $\tilde{\mathcal {E}}^{(i)}$ and observation $\mathcal {X}$ (Eq. DISPLAY_FORM8). We then assume the conditional independence of ${\bf W}_u$ given ${\bf X}_u$ for each segment $u$ (Eq. ). Finally, we further assume the conditional independence of $W_{j,u}$ given $\tilde{e}_j^{(i)}$ for each speaker $j$ (Eq. ). The final equation can be solved by applying TS-ASR for each segment $u$ for each speaker $j$ (lower part of Figure FIGREF1 (right)). We will review the detail of TS-ASR in the next section. <<</Iterative maximization>>> <<</Simultaneous ASR and Speaker Diarization>>> <<<TS-ASR: Review>>> <<<Overview of TS-ASR>>> TS-ASR is a technique to extract and recognize only the speech of a target speaker given a short sample utterance of that speaker BIBREF17, BIBREF21, BIBREF4. Originally, the sample utterance was fed into a special neural network that outputs an averaged embedding to control the weighting of speaker-dependent blocks of the acoustic model (AM). However, to make the problem simpler, we assume that a $d$-dimensional speaker embedding $e_{\rm tgt}\in \mathbb {R}^d$ is extracted from the sample utterance. In this context, TS-ASR can be expressed as the problem to find the best hypothesis $W_{\rm tgt}$ given observation ${\bf X}$ and speaker embedding $e_{\rm tgt}$ as follows. If we have a well-trained TS-ASR, Eq. can be solved by simply applying the TS-ASR for each segment $u$ for each speaker $j$. <<</Overview of TS-ASR>>> <<<TS-AM with auxiliary output network>>> <<<Overview>>> Although any speech recognition architecture can be used for TS-ASR, we adopted a variant of the TS-AM that was recently proposed and has promising accuracy BIBREF5. Figure FIGREF13 describes the TS-AM that we applied for this study. This model has two input branches. One branch accepts acoustic features ${\bf X}$ as a normal AM while the other branch accepts an embedding $e_{\rm tgt}$ that represents the characteristics of the target speaker. In this study, we used a log Mel-filterbank (FBANK) and i-vector BIBREF18, BIBREF22 for the acoustic features and target-speaker embedding, respectively. A unique component of the model is in its output branch. The model has multiple output branches that produce outputs ${\bf Y}^{\rm tgt}$ and ${\bf Y}^{\rm int}$ for the loss functions for the target and interference speakers, respectively. The loss for the target speaker is defined to maximize the target-speaker ASR accuracy, while the loss for interference speakers is defined to maximize the interference-speaker ASR accuracy. We used lattice-free maximum mutual information (LF-MMI) BIBREF23 for both criteria. The original motivation of the output branch for interference speakers was the improvement of TS-ASR by achieving a better representation for speaker separation in the shared layers. However, it was also shown that the output branch for interference speakers can be used for the secondary ASR for interference speakers given the embedding of the target speaker BIBREF5. In this paper, we found out that the latter property worked very well for the ASR for dialogue recordings, which will be explained in the evaluation section. The network is trained with a mixture of multi-speaker speech given their transcriptions. 
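As an illustration of the two-input / two-output structure just described, here is a toy PyTorch sketch. The 40-dimensional FBANK input and 100-dimensional i-vector come from the text; the layer types and sizes, the way the embedding is injected, and the plain linear output heads are our assumptions (the actual model combines CNN, TDNN, and LSTM layers and is trained with LF-MMI).

```python
import torch
import torch.nn as nn


class TargetSpeakerAM(nn.Module):
    """Toy acoustic model with two input branches (acoustic features and a
    target-speaker embedding) and two output branches (target / interference).
    Illustrative only; not the authors' architecture."""

    def __init__(self, feat_dim=40, spk_dim=100, hidden_dim=512, num_pdfs=2000):
        super().__init__()
        self.spk_proj = nn.Linear(spk_dim, hidden_dim)    # speaker branch
        self.feat_proj = nn.Linear(feat_dim, hidden_dim)  # acoustic branch
        self.shared = nn.LSTM(hidden_dim, hidden_dim, num_layers=2,
                              batch_first=True)
        self.target_head = nn.Linear(hidden_dim, num_pdfs)        # Y^tgt
        self.interference_head = nn.Linear(hidden_dim, num_pdfs)  # Y^int

    def forward(self, feats, spk_embedding):
        # feats: (batch, time, feat_dim); spk_embedding: (batch, spk_dim)
        spk = self.spk_proj(spk_embedding).unsqueeze(1)  # (batch, 1, hidden)
        x = self.feat_proj(feats) + spk                  # inject the speaker
        h, _ = self.shared(x)
        return self.target_head(h), self.interference_head(h)


if __name__ == "__main__":
    model = TargetSpeakerAM()
    y_tgt, y_int = model(torch.randn(2, 150, 40), torch.randn(2, 100))
    print(y_tgt.shape, y_int.shape)  # torch.Size([2, 150, 2000]) twice
```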
We assume that, for each training sample, (a) transcriptions of at least two speakers are given, (b) the transcription for the target speaker is marked so that we can identify the target speaker's transcription, and (c) a sample for the target speaker can be used to extract speaker embeddings. These assumptions can be easily satisfied by artificially generating training data by mixing the speech of multiple speakers. <<</Overview>>> <<<Loss function>>> The main loss function for the target speaker is defined as where $u$ corresponds to the index of training samples in this case. The term $\mathcal {G}^{\rm tgt}_u$ indicates a numerator (or reference) graph that represents a set of possible correct state sequences for the utterance of the target speaker of the $u$-th training sample, ${\bf S}$ denotes a hypothesis state sequence for the $u$-th training sample, and $\mathcal {G}^{D}$ denotes a denominator graph, which represents a possible hypothesis space and normally consists of a 4-gram phone language model in LF-MMI training BIBREF23. The auxiliary interference speaker loss is then defined to maximize the interference-speaker ASR accuracy, which we expect to enhance the speaker separation ability of the neural network. This loss is defined as where $\mathcal {G}^{\rm int}_u$ denotes a numerator (or reference) graph that represents a set of possible correct state sequences for the utterance of the interference speaker of the $u$-th training sample. Finally, the loss function $\mathcal {F}^{\rm comb}$ for training is defined as the combination of the target and interference losses, where $\alpha $ is the scaling factor for the auxiliary loss. In our evaluation, we set $\alpha =1.0$. Setting $\alpha =0.0$, however, corresponds to normal TS-ASR. <<</Loss function>>> <<</TS-AM with auxiliary output network>>> <<</TS-ASR: Review>>> <<<Evaluation>>> <<<Experimental settings>>> <<<Main evaluation data: real dialogue recordings>>> We conducted our experiments on the CSJ BIBREF25, which is one of the most widely used evaluation sets for Japanese speech recognition. The CSJ consists of more than 600 hrs of Japanese recordings. While most of the content is lecture recordings by a single speaker, CSJ also contains 11.5 hrs of 54 dialogue recordings (average 12.8 min per recording) with two speakers, which were the main target of ASR and speaker diarization in this study. During the dialogue recordings, two speakers sat in two adjacent sound proof chambers divided by a glass window. They could talk with each other over voice connection through a headset for each speaker. Therefore, speech was recorded separately for each speaker, and we generated mixed monaural recordings by mixing the corresponding speeches of two speakers. When mixing two recordings, we did not apply any normalization of speech volume. Due to this recording procedure, we were able to use non-overlapped speech to evaluate the oracle WERs. It should be noted that, although the dialogue consisted of only two speakers, the speaker overlap ratio of the recordings was very high due to many backchannels and natural turn-taking. Among all recordings, 16.7% of the region was overlapped speech while 66.4% was spoken by a single speaker. The remaining 16.9% was silence. Therefore, 20.1% (=16.7/(16.7+66.4)) of speech regions was speaker overlap. From the viewpoint of ASR, 33.5% (= (16.7*2)/(16.7*2+66.4)) of the total duration to be recognized was overlapped. 
These values were even higher than those reported for meetings with more than two speakers BIBREF26, BIBREF2. Therefore, these dialogue recordings are very challenging for both ASR and speaker diarization. We observed significantly high WER and DER, which is discussed in the next section. <<</Main evaluation data: real dialogue recordings>>> <<<Sub evaluation data: simulated 2-speaker mixture>>> To evaluate TS-ASR, we also used the simulated 2-speaker-mixed data by mixing the three official single-speaker evaluation sets of CSJ, i.e., E1, E2, and E3 BIBREF27. Each set includes different groups of 10 lectures (5.6 hrs, 30 lectures in total). The E1 set consists of 10 lectures of 10 male speakers, and E2 and E3 each consists of 10 lectures of 5 female and 5 male speakers. We generate two-speaker mixed speech by adding randomly selected speech (= interference-speaker speech) to the original speech (= target-speaker speech) with the constraint that the target and interference speakers were different, and each interference speaker was selected only once from the dataset. When we mixed the two speeches, we configured them to have the same power level, and shorter speech was mixed with the longer speech from a random starting point selected to ensure the end point of the shorter one did not exceed that of the longer one. <<</Sub evaluation data: simulated 2-speaker mixture>>> <<<Training data and training settings>>> The rest of the 571 hrs of 3,207 lecture recordings (excluding the same speaker's lectures in the evaluation sets) were used for AM and language model (LM) training. We generated two-speaker mixed speech for training data in accordance with the following protocol. Prepare a list of speech samples (= main list). Shuffle the main list to create a second list under the constraint that the same speaker does not appear in the same line in the main and second lists. Mix the audio in the main and second lists one-by-one with a specific signal-to-interference ratio (SIR). For training data, we randomly sampled an SIR as follows. In 1/3 probability, sample the SIR from a uniform distribution between -10 and 10 dB. In 1/3 probability, sample the SIR from a uniform distribution between 10 and 60 dB. The transcription of the interference speaker was set to null. In 1/3 probability, sample the SIR from a uniform distribution between -60 and -10 dB. The transcription of the target speaker was set to null. The volume of each mixed speech was randomly changed to enhance robustness against volume difference. A speech for extracting a speaker embedding was also randomly selected for each speech mixture from the main list. Note that the random perturbation of volume was applied only for the training data, not for evaluation data. We trained a TS-AM consisting of a convolutional neural network (CNN), time-delay NN (TDNN) BIBREF28, and long short-term memory (LSTM) BIBREF29, as shown in fig:ts-am. The input acoustic feature for the network was a 40-dimensional FBANK without normalization. A 100-dimensional i-vector was also extracted and used for the target-speaker embedding to indicate the target speaker. For extracting this i-vector, we randomly selected an utterance of the same speaker. We conducted 8 epochs of training on the basis of LF-MMI, where the initial learning rate was set to 0.001 and exponentially decayed to 0.0001 by the end of the training. We applied $l2$-regularization and CE-regularization BIBREF23 with scales of 0.00005 and 0.1, respectively. 
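Returning briefly to the data-simulation protocol above, a rough sketch of the SIR sampling and mixing step is shown below, assuming equal-length waveforms; the random offset for shorter signals, the volume perturbation, and the speaker-pairing constraint are omitted, and the function names are ours.

```python
import numpy as np


def sample_training_sir(rng):
    """Sample a signal-to-interference ratio (dB) and decide which
    transcription (if any) is set to null, following the 1/3-1/3-1/3
    scheme described in the text."""
    case = rng.integers(3)
    if case == 0:
        return rng.uniform(-10.0, 10.0), "keep_both"
    if case == 1:
        return rng.uniform(10.0, 60.0), "null_interference"
    return rng.uniform(-60.0, -10.0), "null_target"


def mix_at_sir(target, interference, sir_db):
    """Scale the interference so that the target/interference power ratio
    equals sir_db, then add the two signals."""
    p_tgt = np.mean(target ** 2)
    p_int = np.mean(interference ** 2) + 1e-12
    gain = np.sqrt(p_tgt / (p_int * 10 ** (sir_db / 10.0)))
    return target + gain * interference


rng = np.random.default_rng(0)
tgt = rng.standard_normal(16000)  # stand-ins for real waveforms
itf = rng.standard_normal(16000)
sir, label_rule = sample_training_sir(rng)
mixture = mix_at_sir(tgt, itf, sir)
print(round(sir, 1), label_rule, mixture.shape)
```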
The leaky hidden Markov model coefficient was set to 0.1. A backstitch technique BIBREF30 with a backstitch scale of 1.0 and backstitch interval of 4 was also used. For comparison, we trained another TS-AM without the auxiliary loss. We also trained a “clean AM” using clean, non-speaker-mixed speech. For this clean model, we used a model architecture without the auxiliary output branch, and an i-vector was extracted every 100 msec for online speaker/environment adaptation. In decoding, we used a 4-gram LM trained using the transcription of the training data. All our experiments were conducted on the basis of the Kaldi toolkit BIBREF31. <<</Training data and training settings>>> <<</Experimental settings>>> <<<Preliminary experiment with simulated 2-speaker mixture>>> <<<Evaluation of TS-ASR>>> We first evaluated the TS-AM with two-speaker mixture of the E1, E2, and E3 evaluation sets. For each test utterance, a sample of the target speaker was randomly selected from the other utterances in the test set. We used the same random seed over all experiments, so that they could be conducted under the same conditions. The results are listed in Table TABREF32. Although the clean AM produced a WER of 7.90% for the original clean dataset, the WER severely degraded to 88.03% by mixing two speakers. The TS-AM then significantly recovered the WER to 20.78% ($\alpha =0.0$). Although the improvement was not so significant compared with that reported in BIBREF5, the auxiliary loss further improved the WER to 20.53% ($\alpha =1.0$). Note that E1 contains only male speakers while E2 and E3 contain both female and male speakers. Because of this, E1 showed larger degradation of WER when 2 speakers were mixed. <<</Evaluation of TS-ASR>>> <<</Preliminary experiment with simulated 2-speaker mixture>>> <<</Evaluation>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nSimultaneous ASR and Speaker Diarization\nProblem statement\nIterative maximization\nTS-ASR: Review\nOverview of TS-ASR\nTS-AM with auxiliary output network\nOverview\nLoss function\nEvaluation\nExperimental settings\nMain evaluation data: real dialogue recordings\nSub evaluation data: simulated 2-speaker mixture\nTraining data and training settings\nPreliminary experiment with simulated 2-speaker mixture\nEvaluation of TS-ASR" ], "type": "outline" }
1911.08829
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Casting a Wide Net: Robust Extraction of Potentially Idiomatic Expressions <<<Abstract>>> Idiomatic expressions like `out of the woods' and `up the ante' present a range of difficulties for natural language processing applications. We present work on the annotation and extraction of what we term potentially idiomatic expressions (PIEs), a subclass of multiword expressions covering both literal and non-literal uses of idiomatic expressions. Existing corpora of PIEs are small and have limited coverage of different PIE types, which hampers research. To further progress on the extraction and disambiguation of potentially idiomatic expressions, larger corpora of PIEs are required. In addition, larger corpora are a potential source for valuable linguistic insights into idiomatic expressions and their variability. We propose automatic tools to facilitate the building of larger PIE corpora, by investigating the feasibility of using dictionary-based extraction of PIEs as a pre-extraction tool for English. We do this by assessing the reliability and coverage of idiom dictionaries, the annotation of a PIE corpus, and the automatic extraction of PIEs from a large corpus. Results show that combinations of dictionaries are a reliable source of idiomatic expressions, that PIEs can be annotated with a high reliability (0.74-0.91 Fleiss' Kappa), and that parse-based PIE extraction yields highly accurate performance (88% F1-score). Combining complementary PIE extraction methods increases reliability further, to over 92% F1-score. Moreover, the extraction method presented here could be extended to other types of multiword expressions and to other languages, given that sufficient NLP tools are available. <<</Abstract>>> <<<Introduction>>> Idiomatic expressions pose a major challenge for a wide range of applications in natural language processing BIBREF0. These include machine translation BIBREF1, BIBREF2, semantic parsing BIBREF3, sentiment analysis BIBREF4, and word sense disambiguation BIBREF5. Idioms show significant syntactic and morphological variability (e.g. beans being spilled for spill the beans), which makes them hard to find automatically. Moreover, their non-compositional nature makes idioms really hard to interpret, because their meaning is often very different from the meanings of the words that make them up. Hence, successful systems need not only be able to recognise idiomatic expressions in text or dialogue, but they also need to give a proper interpretation to them. As a matter of fact, current language technology performs badly on idiom understanding, a phenomenon that perhaps has not received enough attention. Nearly all current language technology used in NLP applications is based on supervised machine learning. This requires large amounts of labelled data. In the case of idiom interpretation, however, only small datasets are available. These contain just a couple of thousand idiom instances, covering only about fifty different types of idiomatic expressions. 
In fact, existing annotated corpora tend to cover only a small set of idiom types, comprising just a few syntactic patterns (e.g., verb-object combinations), of which a limited number of instances are extracted from a large corpus. This is not surprising as preparing and compiling such corpora involves a large amount of manual extraction work, especially if one wants to allow for form variation in the idiomatic expressions (for example, extracting cooking all the books for cook the books). This work involves both the crafting of syntactic patterns to match potential idiomatic expressions and the filtering of false extractions (non-instances of the target expression e.g. due to wrong parses), and increases with the amount of idiom types included in the corpus (which, in the worst case, means an exponential increase in false extractions). Thus, building a large corpus of idioms, especially one that covers many types in many syntactic constructions, is costly. If a high-precision, high-recall system can be developed for the task of extracting the annotation candidates, this cost will be greatly reduced, making the construction of a large corpus much more feasible. The variability of idioms has been a significant topic of interest among researchers of idioms. For example, BIBREF6 investigates the internal and external modification of a set of idioms in a large English corpus, whereas BIBREF7, quantifies and classifies the variation of a set of idioms in a large corpus of Dutch, setting up a useful taxonomy of variation types. Both find that, although idiomatic expressions mainly occur in their dictionary form, there is a significant minority of idiom instances that occur in non-dictionary variants. Additionally, BIBREF8 show that idiom variants retain their idiomatic meaning more often and are processed more easily than previously assumed. This emphasises the need for corpora covering idiomatic expressions to include these variants, and for tools to be robust in dealing with them. As such, the aim of this article is to describe methods and provide tools for constructing larger corpora annotated with a wider range of idiom types than currently in existence due to the reduced amount of manual labour required. In this way we hope to stimulate further research in this area. In contrast to previous approaches, we want to catch as many idiomatic expressions as possible, and we achieve this by casting a wide net, that is, we consider the widest range of possible idiom variants first and then filter out any bycatch in a way that requires the least manual effort. We expect research will benefit from having larger corpora by improving evaluation quality, by allowing for the training of better supervised systems, and by providing additional linguistic insight into idiomatic expressions. A reliable method for extracting idiomatic expressions is not only needed for building an annotated corpus, but can also be used as part of an automatic idiom processing pipeline. In such a pipeline, extracting potentially idiomatic expressions can be seen as a first step before idiom disambiguation, and the combination of the two modules then functions as an complete idiom extraction system. The main research question that we aim to answer in this article is whether dictionary-based extraction of potentially idiomatic expressions is robust and reliable enough to facilitate the creation of wide-coverage sense-annotated idiom corpora. 
By answering this question we make several contributions to research on multiword expressions, in particular that of idiom extraction. Firstly, we provide an overview of existing research on annotating idiomatic expressions in corpora, showing that current corpora cover only small sets of idiomatic types (Section SECREF3). Secondly, we quantify the coverage and reliability of a set of idiom dictionaries, demonstrating that there is little overlap between resources (Section SECREF4). Thirdly, we develop and release an evaluation corpus for extracting potentially idiomatic expressions from text (Section SECREF5). Finally, various extraction systems and combinations thereof are implemented, made available to the research community, and evaluated empirically (Section SECREF6). <<</Introduction>>> <<<New Terminology: Potentially Idiomatic Expression (PIE)>>> The ambiguity of phrases like wake up and smell the coffee poses a terminological problem. Usually, these phrases are called idiomatic expressions, which is suitable when they are used in an idiomatic sense, but not so much when they are used in a literal sense. Therefore, we propose a new term: potentially idiomatic expressions, or PIEs for short. The term potentially idiomatic expression refers to those expressions which can have an idiomatic meaning, regardless of whether they actually have that meaning in a given context. So, see the light is a PIE in both `After another explanation, I finally saw the light' and `I saw the light of the sun through the trees', while it is an idiomatic expression in the first context, and a literal phrase in the latter context. The processing of PIEs involves three main challenges: the discovery of (new) PIE types, the extraction of instances of known PIE types in text, and the disambiguation of PIE instances in context. Here, we propose calling the discovery task simply PIE discovery, the extraction task simply PIE extraction, and the disambiguation task PIE disambiguation. Note that these terms contrast with the terms used in existing research. There, the discovery task is called type-based idiom detection and the disambiguation task is called token-based idiom detection (cf. BIBREF10, BIBREF11), although this usage is not always consistent. Because these terms are very similar, they are potentially confusing, and that is why we propose novel terminology. Other terminology comes from literature on multiword expressions (MWEs) more generally, i.e. not specific to idioms. Here, the task of finding new MWE types is called MWE discovery and finding instances of known MWE types is called MWE identification BIBREF12. Note, however, that MWE identification generally consists of finding only the idiomatic usages of these types (e.g. BIBREF13). This means that MWE identification consists of both the extraction and disambiguation tasks, performed jointly. In this work, we propose to split this into two separate tasks, and we are concerned only with the PIE extraction part, leaving PIE disambiguation as a separate problem. <<</New Terminology: Potentially Idiomatic Expression (PIE)>>> <<<Related Work>>> This section is structured so as to reflect the dual contribution of the present work. First, we discuss existing resources annotated for idiomatic expressions. Second, we discuss existing approaches to the automatic extraction of idioms. 
<<<Annotated Corpora and Annotation Schemes for Idioms>>> There are four sizeable sense-annotated PIE corpora for English: the VNC-Tokens Dataset BIBREF9, the Gigaword dataset BIBREF14, the IDIX Corpus BIBREF10, and the SemEval-2013 Task 5 dataset BIBREF15. An overview of these corpora is presented in Table TABREF7. <<<VNC-Tokens>>> The VNC-Tokens dataset contains 53 different PIE types. BIBREF9 extract up to 100 instances from the British National Corpus for each type, for a total of 2,984 instances. These types are based on a pre-existing list of verb-noun combinations and were filtered for frequency and whether two idiom dictionaries both listed them. Instances were extracted automatically, by parsing the corpus and selecting all sentences with the right verb and noun in a direct-object relation. It is unclear whether the extracted sentences were manually checked, but no false extractions are mentioned in the paper or present in the dataset. All extracted PIE instances were annotated for sense as either idiomatic, literal or unclear. This is a self-explanatory annotation scheme, but BIBREF9 note that senses are not binary, but can form a continuum. For example, the idiomaticity of have a word in `You have my word' is different from both the literal sense in `The French have a word for this' and the figurative sense in `My manager asked to have a word'. They instructed annotators to choose idiomatic or literal even in ambiguous middle-of-the-continuum cases, and restrict the unclear label only to cases where there is not enough context to disambiguate the meaning of the PIE. <<</VNC-Tokens>>> <<<Gigaword>>> BIBREF14 present a corpus of 17 PIE types, for which they extracted all instances from the Gigaword corpus BIBREF18, yielding a total of 3,964 instances. BIBREF14 extracted these instances semi-automatically by manually defining all inflectional variants of the verb in the PIE and matching these in the corpus. They did not allow for inflectional variations in non-verb words, nor did they allow intervening words. They annotated these potential idioms as either literal or figurative, excluding ambiguous and unclear instances from the dataset. <<</Gigaword>>> <<<IDIX>>> BIBREF10 build on the methodology of BIBREF14, but annotate a larger set of idioms (52 types) and extract all occurrences from the BNC rather than the Gigaword corpus, for a total of 4,022 instances including false extractions. BIBREF10 use a more complex semi-automatic extraction method, which involves parsing the corpus, manually defining the dependency patterns that match the PIE, and extracting all sentences containing those patterns from the corpus. This allows for larger form variations, including intervening words and inflectional variation of all words. In some cases, this yields many non-PIE extractions, as for recharge one's batteries in Example SECREF10. These were not filtered out before annotation, but rather filtered out as part of the annotation process, by having false extraction as an additional annotation label. For sense annotation, they use an extensive tagset, distinguishing literal, non-literal, both, meta-linguistic, embedded, and undecided labels. Here, the both label (Example SECREF10) is used for cases where both senses are present, often as a form of deliberate word play. The meta-linguistic label (Example SECREF10) applies to cases where the PIE instance is used as a linguistic item to discuss, not as part of a sentence. 
The embedded label (Example SECREF10) applies to cases where the PIE is embedded in a larger figurative context, which makes it impossible to say whether a literal or figurative sense is more applicable. The undecided label is used for unclear and undecidable cases. They take into account the fact that a PIE can have multiple figurative senses, and enumerate these separately as part of the annotation. . These high-performance, rugged tools are claimed to offer the best value for money on the market for the enthusiastic d-i-yer and tradesman, and for the first time offer the possibility of a battery recharging time of just a quarter of an hour. (from IDIX corpus, ID #314) . Left holding the baby, single mothers find it hard to fend for themselves. (from BIBREF10, p.642) . It has long been recognised that expressions such as to pull someone's leg, to have a bee in one's bonnet, to kick the bucket, to cook someone's goose, to be off one's rocker, round the bend, up the creek, etc. are semantically peculiar. (from BIBREF10, p.642) . You're like a restless bird in a cage. When you get out of the cage, you'll fly very high. (from BIBREF10, p.642) The both, meta-linguistic, and embedded labels are useful and linguistically interesting distinctions, although they occur very rarely (0.69%, 0.15%, and an unknown %, respectively). As such, we include these cases in our tagset (see Section SECREF5), but group them under a single label, other, to reduce annotation complexity. We also follow BIBREF10 in that we combine both the PIE/non-PIE annotation and the sense annotation in a single task. <<</IDIX>>> <<<SemEval-2013 Task 5b>>> BIBREF15 created a dataset for SemEval-2013 Task 5b, a task on detecting semantic compositionality in context. They selected 65 PIE types from Wiktionary, and extracted instances from the ukWaC corpus BIBREF17, for a total of 4,350 instances. It is unclear how they extracted the instances, and how much variation was allowed for, although there is some inflectional variation in the dataset. An unspecified amount of manual filtering was done on the extracted instances. The extracted PIE instances were labelled as literal, idiomatic, both, or undecidable. Interestingly, they crowdsourced the sense annotations using CrowdFlower, with high agreement (90%–94% pairwise). Undecidable cases and instances on which annotators disagreed were removed from the dataset. <<</SemEval-2013 Task 5b>>> <<<General Multiword Expression Corpora>>> In addition to the aforementioned idiom corpora, there are also corpora focused on multiword expressions (MWEs) in a more general sense. As idioms are a subcategory of MWEs, these corpora also include some idioms. The most important of these are the PARSEME corpus BIBREF19 and the DiMSUM corpus BIBREF20. DiMSUM provides annotations of over 5,000 MWEs in approximately 90K tokens of English text, consisting of reviews, tweets and TED talks. However, they do not categorise the MWEs into specific types, meaning we cannot easily quantify the number of idioms in the corpus. In contrast to the corpus-specific sense labels seen in other corpora, DiMSUM annotates MWEs with WordNet supersenses, which provide a broad category of meaning for each MWE. Similarly, the PARSEME corpus consists of over 62K MWEs in almost 275K tokens of text across 18 different languages (with the notable exception of English). 
The main differences with DiMSUM, except for scale and multilingualism, are that it only includes verbal MWEs, and that subcategorisation is performed, including a specific category for idioms. Idioms make up almost a quarter of all verbal MWEs in the corpus, although the proportion varies wildly between languages. In both corpora, MWE annotation was done in an unrestricted manner, i.e. there was not a predefined set of expressions to which annotation was restricted. <<</General Multiword Expression Corpora>>> <<<Overview>>> In sum, there is large variation in corpus creation methods, regarding PIE definition, extraction method, annotation schemes, base corpus, and PIE type inventory. Depending on the goal of the corpus, the amount of deviation that is allowed from the PIE's dictionary form to the instances can be very little BIBREF14, to quite a lot BIBREF10. The number of PIE types covered by each corpus is limited, ranging from 17 to 65 types, often limited to one or more syntactic patterns. The extraction of PIE instances is usually done in a semi-automatic manner, by manually defining patterns in a text or parse tree, and doing some manual filtering afterwards. This works well, but an extension to a large number of PIE types (e.g. several hundreds) would also require a large increase in the amount of manual effort involved. Considering the sense annotations done on the PIE corpora, there is significant variation, with BIBREF9 using only three tags, whereas BIBREF10 use six. Outside of PIE-specific corpora there are MWE corpora, which provide a different perspective. A major difference there is that annotation is not restricted to a pre-specified set of expressions, which has not been done for PIEs specifically. <<</Overview>>> <<</Annotated Corpora and Annotation Schemes for Idioms>>> <<<Extracting Idioms from Corpora>>> There are two main approaches to idiom extraction. The first approach aims to distinguish idioms from other multiword phrases, where the main purpose is to expand idiom inventories with rare or novel expressions BIBREF21, BIBREF22, BIBREF23, BIBREF24. The second approach aims to extract all occurrences of a known idiomatic expression in a text. In this paper, we focus on the latter approach. We rely on idiom dictionaries to provide a list of PIE types, and build a system that extracts all instances of those PIE types from a corpus. High-quality idiom dictionaries exist for most well-resourced languages, but their reliability and coverage is not known. As such, we quantify the coverage of dictionaries in Section SECREF4. There is, to the best of our knowledge, no existing work that focuses on dictionary-based PIE extraction. However, there is closely-related work by BIBREF25, who present a system for the dictionary-based extraction of verb-noun combinations (VNCs) in English and Spanish. In their case, the VNCs can be any kind of multiword expression, which they subdivide into literal expressions, collocations, light verb constructions, metaphoric expressions, and idioms. They extract 173 English VNCs and 150 Spanish VNCs and annotate these with both their lexico-semantic MWE type and the amount of morphosyntactic variation they exhibit. BIBREF25 then compare a word sequence-based method, a chunking-based method, and a parse-based method for VNC extraction. Each method relies on the morpho-syntactic information in order to limit false extractions. 
Precision is evaluated manually on a sample of the extracted VNCs, and recall is estimated by calculating the overlap between the output of the three methods. Evaluation shows that the methods are highly complementary both in recall, since they extract different VNCs, and in precision, since combining the extractors yields fewer false extractions. Whereas BIBREF25 focus on both idiomatic and literal uses of the set of expressions, like in this paper, BIBREF26 tackle only half of that task, namely extracting only literal uses of a given set of VMWEs in Polish. This complicates the task, since it combines extracting all occurrences of the VMWEs and then distinguishing literal from idiomatic uses. Interestingly, they also experiment with models of varying complexity, i.e. just words, part-of-speech tags, and syntactic structures. Their results are hard to put into perspective however, since the frequency of literal VMWEs in their corpus is very rare, whereas corpora containing PIEs tend to show a more balanced distribution. Other similar work to ours also focuses on MWEs more generally, or on different subtypes of MWEs. In addition, these tend to combine both extraction and disambiguation in that they aim to extract only idiomatically used instances of the MWE, without extracting literally used instances or non-instances. Within this line of work, BIBREF27 focuses on verb-particle constructions, BIBREF28 on verbal MWEs (including idioms), and BIBREF29 on verbal MWEs (especially non-canonical variants). Both BIBREF28 and BIBREF29 rely on a pre-defined set of expressions, whereas BIBREF27 also extracts unseen expressions, although based on a pre-defined set of particles and within the vary narrow syntactic frame of verb-particle constructions. The work of BIBREF27 is most similar to ours in that it builds an unsupervised system using existing NLP tools (PoS taggers, chunkers, parsers) and finds that a combination of systems using those tools performs best, as we find in Section SECREF69. BIBREF28 and BIBREF29, by contrast, use supervised classifiers which require training data, not just for the task in general, but specific to the set of expressions used in the task. Although our approach is similar to that of BIBREF25, both in the range of methods used and in the goal of extracting certain multiword expressions regardless of morphosyntactic variation, there are two main differences. First, we use dictionaries, but extract entries automatically and do not manually annotate their type and variability. As a result, our methods rely only on the surface form of the expression taken from the dictionary. Second, we evaluate precision and recall in a more rigorous way, by using an evaluation corpus exhaustively annotated for PIEs. In addition, we do not put any restriction on the syntactic type of the expressions to be extracted, which BIBREF27, BIBREF28, BIBREF25, and BIBREF29 all do. <<</Extracting Idioms from Corpora>>> <<</Related Work>>> <<<Coverage of Idiom Inventories>>> <<<Background>>> Since our goal is developing a dictionary-based system for extracting potentially idiomatic expressions, we need to devise a proper method for evaluating such a system. This is not straightforward, even though the final goal of such a system is simple: it should extract all potentially idiomatic expressions from a corpus and nothing else, regardless of their sense and the form they are used in. 
The type of system proposed here hence has two aspects that can be evaluated: the dictionary that it is using as a resource for idiomatic expression, and the extractor component that finds idioms in a corpus. The difficulty here is that there is no undisputed and unambiguous definition of what counts as an idiom BIBREF30, as is the case with multiword expressions in general BIBREF12. Of course, a complete set of idiomatic expressions for English (or any other language) is impossible to get due to the broad and ever-changing nature of language. This incompleteness is exacerbated by the ambiguity problem: if we had a clear definition of idiom we could make an attempt of evaluating idiom dictionaries on their accuracy, but it is practically impossible to come up with a definition of idiom that leaves no room for ambiguity. This ambiguity, among others, creates a large grey area between clearly non-idiomatic phrases on the one hand (e.g. buy a house), and clear potentially idiomatic phrases on the other hand (e.g. buy the farm). As a consequence, we cannot empirically evaluate the coverage of the dictionaries. Instead, in this work, we will quantify the divergence between various idiom dictionaries and corpora, with regard to their idiom inventories. If they show large discrepancies, we take that to mean that either there is little agreement on definitions of idiom or the category is so broad that a single resource can only cover a small proportion. Conversely, if there is large agreement, we assume that idiom resources are largely reliable, and that there is consensus around what is, and what is not, an idiomatic expression. We use different idiom resources and assume that the combined set of resources yields an approximation of the true set of idioms in English. A large divergence between the idiom inventories of these resources would then suggest a low recall for a single resource, since many other idioms are present in the other resources. Conversely, if the idiom inventories largely overlap, that indicates that a single resource can already yield decent coverage of idioms in the English language. The results of the dictionary comparisons are in Section SECREF36. <<</Background>>> <<<Selected Idiom Resources (Data and Method)>>> We evaluate the quality of three idiom dictionaries by comparing them to each other and to three idiom corpora. Before we report on the comparison we first describe why we select and how we prepare these resources. We investigate the following six idiom resources: Wiktionary; the Oxford Dictionary of English Idioms (ODEI, BIBREF31); UsingEnglish.com (UE); the Sporleder corpus BIBREF10; the VNC dataset BIBREF9; and the SemEval-2013 Task 5 dataset BIBREF15. These dictionaries were selected because they are available in digital format. Wiktionary and UsingEnglish have the added benefit of being freely available. However, they are both crowdsourced, which means they lack professional editing. In contrast, ODEI is a traditional dictionary, created and edited by lexicographers, but it has the downside of not being freely available. For Wiktionary, we extracted all idioms from the category `English Idioms' from the English version of Wiktionary. We took the titles of all pages containing a dictionary entry and considered these idioms. Since we focus on multiword idiomatic expressions, we filtered out all single-word entries in this category. More specifically, since Wiktionary is a constantly changing resource, we used the 8,482 idioms retrieved on 10-03-2017, 15:30. 
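For readers who want to replicate the Wiktionary step, a sketch along the following lines (using the public MediaWiki API) would retrieve the category members and drop single-word entries; the exact category name, the retrieval method (API versus dump), and the namespace filtering are our assumptions, not a description of the authors' pipeline.

```python
import requests

API = "https://en.wiktionary.org/w/api.php"


def wiktionary_category_titles(category="Category:English idioms"):
    """Collect page titles in a Wiktionary category via the MediaWiki API,
    following the standard continuation protocol."""
    params = {"action": "query", "list": "categorymembers",
              "cmtitle": category, "cmnamespace": 0,  # entry pages only
              "cmlimit": 500, "format": "json"}
    titles, cont = [], {}
    while True:
        data = requests.get(API, params={**params, **cont}, timeout=30).json()
        titles += [m["title"] for m in data["query"]["categorymembers"]]
        if "continue" not in data:
            return titles
        cont = data["continue"]


# Keep only multiword entries, mirroring the single-word filtering step.
idioms = [t for t in wiktionary_category_titles() if len(t.split()) > 1]
print(len(idioms), idioms[:5])
```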
We used a similar extraction method for UE, a web page containing freely available resources for ESL learners, including a list of idioms. We extracted all idioms which have publicly available definitions, which numbered 3,727 on 10-03-2017, 15:30. Again, single-word entries and duplicates were filtered out. Concerning ODEI, all idioms from the e-book version were extracted, amounting to 5,911 idioms scraped on 13-03-2017, 10:34. Here we performed an extra processing step to expand idioms containing content in parentheses, such as a tough (or hard) nut (to crack). Using a set of simple expansion rules and some hand-crafted exceptions, we automatically generated all variants for this idiom, with good, but not perfect accuracy. For the example above, the generated variants are: {a tough nut, a tough nut to crack, a hard nut, a hard nut to crack}. The idioms in the VNC dataset are in the form verb_noun, e.g. blow_top, so they were manually expanded to a regular dictionary form, e.g. blow one's top before comparison. <<</Selected Idiom Resources (Data and Method)>>> <<<Method>>> In many cases, using simple string-match to check overlap in idioms does not work, as exact comparison of idioms misses equivalent idioms that differ only slightly in dictionary form. Differences between resources are caused by, for example: inflectional variation (crossing the Rubicon — cross the Rubicon); variation in scope (as easy as ABC — easy as ABC); determiner variation (put the damper on — put a damper on); spelling variation (mind your p's and q's — mind your ps and qs); order variation (call off the dogs — call the dogs off); and different conventions for placeholder words (recharge your batteries — recharge one's batteries), where both your and one's can generalise to any possessive personal pronoun. These minor variations do not fundamentally change the nature of the idiom, and we should count these types of variation as belonging to the same idiom (see also BIBREF32, who devise a measure to quantify different types of variation allowed by specific MWEs). So, to get a good estimate of the true overlap between idiom resources, these variations need to be accounted for, which we do in our flexible matching approach. There is one other case of variation not listed above, namely lexical variation (e.g. rub someone up the wrong way - stroke someone the wrong way). We do not abstract over this, since we consider lexical variation to be a more fundamental change to the nature of the idiom. That is, a lexical variant is an indicator of the coverage of the dictionary, where the other variations are due to different stylistic conventions and do not indicate actual coverage. In addition, it is easy to abstract over the other types of variation in an NLP application, but this is not the case for lexical variation. The overlap counts are estimated by abstracting over all variations except lexical variation in a semi-automatic manner, using heuristics and manual checking. Potentially overlapping idioms are selected using the following set of heuristics: whether an idiom from one resource is a substring (including gaps) of an idiom in the other resource, whether the words of an idiom form a subset of the words of an idiom in the other resource, and whether there is an idiom in the other resource which has a Levenshtein ratio of over 0.8. The Levenshtein ratio is an indicator of the Levenshtein distance between the two idioms relative to their length. 
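The three matching heuristics can be sketched as follows; difflib's SequenceMatcher ratio is used here as a stand-in for the Levenshtein ratio, the 0.8 threshold comes from the text, and the function names are ours.

```python
from difflib import SequenceMatcher


def is_gappy_subsequence(short, long):
    """True if the tokens of `short` occur in `long` in order, possibly
    with gaps (the 'substring including gaps' heuristic)."""
    it = iter(long.split())
    return all(tok in it for tok in short.split())


def candidate_match(idiom_a, idiom_b, ratio_threshold=0.8):
    """Flag a pair of dictionary forms as a potential match for manual review."""
    a, b = idiom_a.lower(), idiom_b.lower()
    if is_gappy_subsequence(a, b) or is_gappy_subsequence(b, a):
        return True
    if set(a.split()) <= set(b.split()) or set(b.split()) <= set(a.split()):
        return True
    # difflib's ratio approximates the Levenshtein ratio used in the text.
    return SequenceMatcher(None, a, b).ratio() > ratio_threshold


print(candidate_match("crossing the Rubicon", "cross the Rubicon"))  # True
print(candidate_match("call off the dogs", "call the dogs off"))     # True
print(candidate_match("spill the beans", "kick the bucket"))         # False
```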
These potential matches are then judged manually on whether they are really forms of the same idiom or not. <<</Method>>> <<<Results>>> The results of using exact string matching to quantify the overlap between the dictionaries is illustrated in Figure FIGREF37. Overlap between the three dictionaries is low. A possible explanation for this lies with the different nature of the dictionaries. Oxford is a traditional dictionary, created and edited by professional lexicographers, whereas Wiktionary is a crowdsourced dictionary open to everyone, and UsingEnglish is similar, but focused on ESL-learners. It is likely that these different origins result in different idiom inventories. Similarly, we would expect that the overlap between a pair of traditional dictionaries, such as the ODEI and the Penguin Dictionary of English Idioms BIBREF33 would be significantly higher. It should also be noted, however, that comparisons between more similar dictionaries also found relatively little overlap (BIBREF34; BIBREF35). A counterpoint is provided by BIBREF36, who quantifies coverage of verb-particle constructions in three different dictionaries and finds large overlap – perhaps because verb-particle are a more restricted class. As noted previously, using exact string matching is a very limited approach to calculating overlap. Therefore, we used heuristics and manual checking to get more precise numbers, as shown in Table TABREF39, which also includes the three corpora in addition to the three dictionaries. As the manual checking only involved judging similar idioms found in pairs of resources, we cannot calculate three-way overlap as in Figure FIGREF37. The counts of the pair-wise overlap between dictionaries differ significantly between the two methods, which serves to illustrate the limitations of using only exact string matching and the necessity of using more advanced methods and manual effort. Several insights can be gained from the data in Table TABREF39. The relation between Wiktionary and the SemEval corpus is obvious (cf. Section SECREF12), given the 96.92% coverage. For the other dictionary-corpus pairs, the coverage increases proportionally with the size of the dictionary, except in the case of UsingEnglish and the Sporleder corpus. The proportional increase indicates no clear qualitative differences between the dictionaries, i.e. one does not have a significantly higher percentage of non-idioms than the other, when compared to the corpora. Generally, overlap between dictionaries and corpora is low: the two biggest, ODEI and Wiktionary have only around 30% overlap, while the dictionaries also cover no more than approximately 70% of the idioms used in the various corpora. Overlap between the three corpora is also extremely low, at below 5%. This is unsurprising, since a new dataset is more interesting and useful when it covers a different set of idioms than used in an existing dataset, and thus is likely constructed with this goal in mind. <<</Results>>> <<</Coverage of Idiom Inventories>>> <<<Corpus Annotation>>> In order to evaluate the PIE extraction methods developed in this work (Section SECREF6), we exhaustively annotate an evaluation corpus with all instances of a pre-defined set of PIEs. As part of this, we come up with a workable definition of PIEs, and measure the reliability of PIE annotation by inter-annotator agreement. Assuming that we have a set of idioms, the main problem of defining what is and what is not a potentially idiomatic expression is caused by variation. 
In principle, potentially idiomatic expression is an instance of a phrase that, when seen without context, could have either an idiomatic or a literal meaning. This is clearest for the dictionary form of the idiom, as in Example SECREF5. Literal uses generally allow all kinds of variation, but not all of these variations allow a figurative interpretation, e.g. Example SECREF5. However, how much variation an idiom can undergo while retaining its figurative interpretation is different for each expression, and judgements of this might vary from one speaker to the other. An example of this is spill the bean, a variant of spill the beans, in Example SECREF5 judged by BIBREF21 as being highly questionable. However, even here a corpus example can be found containing the same variant used in a figurative sense (Example SECREF5). As such, we assume that we cannot know a priori which variants of an expression allow a figurative reading, and are thus a potentially idiomatic expression. Therefore we consider every possible morpho-syntactic variation of an idiom a PIE, regardless of whether it actually allows a figurative reading. We believe the boundaries of this variation can only be determined based on corpus evidence, and even then they are likely variable. Note that a similar question is tackled by BIBREF26, when they establish the boundary between a `literal reading of a VMWE' and a `coincidental co-occurrence'. BIBREF26's answer is similar to ours, in that they count something as a literal reading of a VMWE if it `the same or equivalent dependencies hold between [the expression]'s components as in its canonical form'. . John kicked the bucket last night. . * The bucket, John kicked last night. . ?? Azin spilled the bean. (from BIBREF21) . Alba reveals Fantastic Four 2 details The Invisible Woman actress spills the bean on super sequel (from ukWaC) <<<Evaluating the Extraction Methods>>> Evaluating the extraction methods is easier than evaluating dictionary coverage, since the goal of the extraction component is more clearly delimited: given a set of PIEs from one or more dictionaries, extract all occurrences of those PIEs from a corpus. Thus, rather than dealing with the undefined set of all PIEs, we can work with a clearly defined and finite set of PIEs from a dictionary. Because we have a clearly defined set of PIEs, we can exhaustively annotate a corpus for PIEs, and use that annotated corpus for automatic evaluation of extraction methods using recall and precision. This allows us to facilitate and speed up annotation by pre-extracting sentences possibly containing a PIE. After the corpus is annotated, the precision and recall can be easily estimated by comparing the extracted PIE instances to those marked in the corpus. The details of the corpus selection, dictionary selection, extraction heuristic and annotation procedure are presented in Section SECREF46, and the details and results of the various extraction methods are presented in Section SECREF6. <<</Evaluating the Extraction Methods>>> <<<Base Corpus and Idiom Selection>>> As a base corpus, we use the XML version of the British National Corpus BIBREF37, because of its size, variety, and wide availability. The BNC is pre-segmented into s-units, which we take to be sentences, w-units, which we take to be words, and c-units, punctuation. We then extract the text of all w-units and c-units. We keep the sentence segmentation, resulting in a set of plain text sentences. 
All sentences are included, except for sentences containing <gap> elements, which are filtered out. These <gap> elements indicate places where material from the original has been left out, e.g. for anonymisation purposes. Since this can result in incomplete sentences that cannot be parsed correctly, we filter out sentences containing these gaps. We use only the written part of the BNC. From this, we extract a set of documents with the aim of having as much genre variation as possible. To achieve this, we select the first document in each genre, as defined by the classCode attribute (e.g. nonAc, commerce, letters). The resulting set of 46 documents makes up our base corpus. Note that these documents vary greatly in size, which means the resulting corpus is varied, but not balanced in terms of size (Table TABREF43). The documents are split across a development and test set, as specified at the end of Section SECREF46. We exclude documents with IDs starting with A0 from all annotation and evaluation procedures, as these were used during development of the extraction tool and annotation guidelines. As for the set of potentially idiomatic expressions, we use the intersection of the three dictionaries, Wiktionary, Oxford, and UsingEnglish. Based on the assumption that, if all three resources include a certain idiom, it must unquestionably be an idiom, we choose the intersection (also see Figure FIGREF37). This serves to exclude questionable entries, like at all, which is in Wiktionary. The final set of idioms used for these experiments consists of 591 different multiword expressions. Although we aim for wide coverage, this is a necessary trade-off to ensure quality. At the same time, it leaves us with a set of idiom types that is approximately ten times larger than present in existing corpora. The set of 591 idioms includes idioms with a large variety of syntactic patterns, of which the most frequent ones are shown in Table TABREF44. The statistics show that the types most prevalent in existing corpora, verb-noun and preposition-noun combinations, are indeed the most frequent ones, but that there is a sizeable minority of types that do not fall into those categories, including coordinated adjectives, coordinated nouns, and nouns with prepositional phrases. This serves to emphasise the necessity of not restricting corpora to a small set of syntactic patterns. <<</Base Corpus and Idiom Selection>>> <<<Extraction of PIE Candidates>>> To annotate the corpus completely manually would require annotators to read the whole corpus, and cross-reference each sentence to a list of almost 600 PIEs, to check whether one of those PIEs occurs in a sentence. We do not consider this a feasible annotation settings, due to both the difficulty of recognising literal usages of idioms and the time cost needed to find enough PIEs, given their low overall frequency. As such, we use a pre-extraction step to present candidates for annotation to the human annotators. Given the corpus and the set of PIEs, we heuristically extract the PIE candidates as follows: given an idiomatic expression, extract every sentence which contains all the defining words of the idiom, in any form. This ensures that all possibly matching sentences get extracted, while greatly pruning the amount of sentences for annotators to look at. In addition, it allows us to present the heuristically matched PIE type and corresponding words to the annotators, which makes it much easier to judge whether something is a PIE or not. 
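To make the pre-extraction heuristic concrete, a minimal Python sketch is given below. The tokenize and lemmatize helpers are placeholders for whatever tokeniser and lemmatiser are available rather than the exact tools used in this work, and the additional restrictions discussed next (word order and intervening words) are omitted.

    from typing import Callable, Iterable, Set

    DETERMINERS = {"a", "an", "the"}

    def content_lemmas(tokens: Iterable[str], lemmatize: Callable[[str], str]) -> Set[str]:
        # Lemmatise the tokens and drop determiners and punctuation, so that
        # inflected forms of the defining words still count as present.
        return {lemmatize(t.lower()) for t in tokens
                if t.isalpha() and t.lower() not in DETERMINERS}

    def extract_candidates(sentences, idioms, tokenize, lemmatize):
        # Yield (idiom, sentence) pairs whenever all defining words of the
        # idiom occur in the sentence, in any inflectional form.
        idiom_lemmas = {i: content_lemmas(tokenize(i), lemmatize) for i in idioms}
        for sentence in sentences:
            sentence_lemmas = content_lemmas(tokenize(sentence), lemmatize)
            for idiom, needed in idiom_lemmas.items():
                if needed and needed <= sentence_lemmas:
                    yield idiom, sentence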
This also means that annotators never have to go through the full list of PIEs during the annotation process. Initially, the heuristic simply extracted any sentence containing all the required words, where a word is any of the inflectional variants of the words in the PIE, except for determiners and punctuation. This method produced large amounts of noise, that is, a set of PIE candidates with only a very low percentage of actual PIEs. This was caused by the presence of some highly frequent PIEs with very little defining lexical content, such as on the make, and in the running. For example, with the original method, every sentence containing the preposition on, and any inflectional form of the verb make was extracted, resulting in a huge number of non-PIE candidates. To limit the amount of noise, two restrictions were imposed. The first restrictions disallows word order variation for PIEs which do not contain a verb. The rationale behind this is that word order variation is only possible with PIEs like spill the beans (e.g. the beans were spilled), and not with PIEs like in the running (*the running in??). The second restriction is that we limit the number of words that can be inserted between the words of a PIE, but only for PIEs like on the make, and in the running, i.e. PIEs which only contain prepositions, determiners and a single noun. The number of intervening words was limited to three tokens, allowing for some variation, as in Example SECREF45, but preventing sentences like Example SECREF45 from being extracted. This restriction could result in the loss of some PIE candidates with a large number of intervening words. However, the savings in annotation time clearly outweigh the small loss in recall in this situation. . Either at New Year or before July you can anticipate a change in the everyday running of your life. (in the running - BNC - document CBC - sentence 458) . [..] if [he] hung around near the goal or in the box for that matter instead of running all over the show [..] (in the running - BNC - document J1C - sentence 1341) <<</Extraction of PIE Candidates>>> <<<Annotation Procedure>>> The manual annotation procedure consists of three different phases (pilot, double annotation, single annotation), followed by an adjudication step to resolve conflicting annotations. Two things are annotated: whether something is a PIE or not, and if it is a PIE, which sense the PIE is used in. In the first phase (0-100-*), we randomly select hundred of the 2,239 PIE candidates which are then annotated by three annotators. All annotators have a good command of English, are computational linguists, and familiar with the subject. The annotators include the first and last author of this paper. The annotators were provided with a short set of guidelines, of which the main rule-of-thumb for labelling a phrase as a PIE is as follows: any phrase is a PIE when it contains all the words, with the same part-of-speech, and in the same grammatical relations as in the dictionary form of the PIE, ignoring determiners. For sense annotation, annotators were to mark a PIE as idiomatic if it had a sense listed in one of the idiom dictionaries, and as literal if it had a meaning that is a regular composition of its component words. For cases which were undecidable due to lack of context, the ?-label was used. The other-label was used as a container label for all cases in which neither the literal or idiomatic sense was correct (e.g. 
meta-linguistic uses and embeddings in metaphorical frames, see also Section SECREF10). The first phase of annotation serves to bring to light any inconsistencies between annotators and fill in any gaps in the annotation guidelines. The resulting annotations already show a reasonably high agreement of 0.74 Fleiss' Kappa. Table TABREF48 shows annotation details and agreement statistics for all three phases. The annotation tasks suffixed by -PIE indicate agreement on PIE/non-PIE annotation and the tasks suffixed by -sense indicate agreement on sense annotation for PIEs. In the second phase of annotation (100-600-* & 600-1100-*), another 1000 of the 2239 PIE candidates are selected to be annotated by two pairs of annotators. This shows very high agreement, as shown in Table TABREF48. This is probably due to the improvement in guidelines and the discussion following the pilot round of annotation. The exception to this are the somewhat lower scores for the 600-1100-sense annotation task. Adjudication revealed that this is due almost exclusively because of a different interpretation of the literal and idiomatic senses of a single PIE type: on the ground. Excluding this PIE type, Fleiss' Kappa increases from 0.63 to 0.77. Because of the high agreement on PIE annotation, we deem it sufficient for the remainder (1108 candidates) to be annotated by only the primary annotator in the third phase of annotation (1100-2239-*). The reliability of the single annotation can be checked by comparing the distribution of labels to the multi-annotated parts. This shows that it falls clearly within the ranges of the other parts, both in the proportion of PIEs and idiomatic senses (see Table TABREF49). The single-annotated part has 49.0% PIEs, which is only 4 percentage points above the 44.7% PIEs in the multi-annotated parts. The proportion of idioms is just 2 percentage points higher, with 55.9% versus 53.9.%. Although inter-annotator agreement was high, there was still a significant number of cases in the triple and double annotated PIE candidate sets where not all annotators agreed. These cases were adjudicated through discussion by all annotators, until they were in agreement. In addition, all PIE candidates which initially received the ?-label (unclear or undecidable) for sense or PIE were resolved in the same manner. In the adjudication procedure, annotators were provided with additional context on each side of the idiom, in contrast to the single sentence provided during the initial annotation. The main reason to do adjudication, rather than simply discarding all candidates for which there was disagreement, was that we expected exactly those cases for which there are conflicting annotations to be the most interesting ones, since having non-standard properties would cause the annotations to diverge. Examples of such interesting non-standard cases are at sea as part of a larger satirical frame in Example SECREF46 and cut the mustard in Example SECREF46 where it is used in a headline as wordplay on a Cluedo character. . The bovine heroine has connections with Cowpeace International, and deals with a huge treacle slick at sea. (at sea - BNC - document CBC - sentence 13550) . Why not cut the Mustard? [..] WADDINGTON Games's proposal to axe Reverend Green from the board game Cluedo is a bad one. (cut the mustard - BNC - document CBC - sentence 14548) We split the corpus at the document level. 
The corpus consists of 45 documents from the BNC, and we split it in such a way that the development set has 1,112 candidates across 22 documents and the test set has 1,127 candidates from 23 documents. Note that this means that the development and test set contain different genres. This ensures that we do not optimise our systems on genre-specific aspects of the data. <<</Annotation Procedure>>> <<</Corpus Annotation>>> <<<Dictionary-based PIE Extraction>>> We propose and implement four different extraction methods, of differing complexities: exact string match, fuzzy string match, inflectional string match, and parser-based extraction. Because of the absence of existing work on this task, we compare these methods to each other, where the more basic methods function as baselines. More complex methods serve to shine light on the difficulty of the PIE extraction task; if simple methods already work sufficiently well, the task is not as hard as expected, and vice versa. Below, each of the extraction methods is presented and discussed in detail. <<<String-based Extraction Methods>>> <<<Exact String Match>>> This is, very simply, extracting all instances of the exact dictionary form of the PIE, from the tokenized text of the corpus. Word boundaries are taken into account, so at sea does not match `that seawater'. As a result, all inflectional and other variants of the PIE are ignored. <<</Exact String Match>>> <<<Fuzzy String Match>>> Fuzzy string match is a rough way of dealing with morphological inflection of the words in a PIE. We match all words in the PIE, taking into account word boundaries, and allow for up to 3 additional letters at the end of each word. These 3 additional characters serve to cover inflectional suffixes. <<</Fuzzy String Match>>> <<<Inflectional String Match>>> In inflectional string match, we aim to match all inflected variations of a PIE. This is done by generating all morphological variants of the words in a PIE, generating all combinations of those words, and then using exact string match as described earlier. Generating morphological variations consists of three steps: part-of-speech tagging, morphological analysis, and morphological reinflection. Since inflectional variation only applies to verbs and nouns, we use the Spacy part-of-speech tagger to detect the verbs and nouns. Then, we apply the morphological analyser morpha to get the base, uninflected form of the word, and then use the morphological generation tool morphg to get all possible inflections of the word. Both tools are part of the Morph morphological processing suite BIBREF38. Note that the Morph tools depend on the part-of-speech tag in the input, so that a wrong PoS may lead to an incorrect set of morphological variants. For a PIE like spill the beans, this results in the following set of variants: $\lbrace $spill the bean, spills the bean, spilled the bean, spilling the bean, spill the beans, spills the beans, spilled the beans, spilling the beans$\rbrace $. Since we generate up to 2 variants for each noun, and up to 4 variants for each verb, the number of variants for PIEs containing multiple verbs and nouns can get quite large. On average, 8 additional variants are generated for each potentially idiomatic expression. <<</Inflectional String Match>>> <<<Additional Steps>>> For all string match-based methods, ways to improve performance are implemented, to make them as competitive as possible. 
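For reference, the three basic matchers just described reduce to a few lines of Python each. The sketch below is a simplification: variants(word) stands in for a morphological analyser/generator such as the morpha/morphg pipeline, and the shared refinements discussed in the rest of this subsection (separator handling, case sensitivity, intervening words, and placeholder expansion) are left out.

    import re
    from itertools import product

    def exact_match(pie: str, sentence: str) -> bool:
        # Exact match of the dictionary form, respecting word boundaries.
        return re.search(r"\b" + re.escape(pie) + r"\b", sentence) is not None

    def fuzzy_match(pie: str, sentence: str) -> bool:
        # Allow up to 3 extra letters at the end of each word, as a rough
        # stand-in for inflectional suffixes.
        pattern = r"\b" + r"\s+".join(re.escape(w) + r"[a-z]{0,3}"
                                      for w in pie.split()) + r"\b"
        return re.search(pattern, sentence) is not None

    def inflectional_match(pie: str, sentence: str, variants) -> bool:
        # Expand the PIE into all combinations of inflected word forms and
        # fall back on exact matching for each expanded variant.
        forms = [list(variants(w)) for w in pie.split()]
        return any(exact_match(" ".join(combo), sentence)
                   for combo in product(*forms))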
Rather than doing exact string matching, we also allow words to be separated by something other than spaces, e.g. nuts-and-bolts for nuts and bolts. Additionally, there is an option to take into account case distinctions. With the case-sensitive option, case is preserved in the idiom lists, e.g. coals to Newcastle, and the string matching is done in a case-sensitive manner. This increases precision, e.g. by avoiding PIEs as part of proper names, but also comes at a cost of recall, e.g. for sentence-initial PIEs. Thirdly, there is the option to allow for a certain number of intervening words between each pair of words in the PIE. This should improve recall, at the cost of precision. For example, this would yield the true positive make a huge mountain out of a molehill for make a mountain out of a molehill, but also false positives like have a smoke and go for have a go. A third shared property of the string-based methods is the processing of placeholders in PIEs. PIEs containing possessive pronoun placeholders, such as one's and someone's are expanded. That is, we remove the original PIE, and add copies of the PIE where the placeholder is replaced by one of the possessive personal pronouns. For example, a thorn in someone's side is replaced by a thorn in {my, your, his, ...} side. In the case of someone's, we also add a wildcard for any possessively used word, i.e. a thorn in —'s side, to match e.g. a thorn in Google's side. Similarly, we make sure that PIE entries containing —, such as the mother of all —, will match any word for — during extraction. We do the same for someone, for which we substitute objective pronouns. For one, this is not possible, since it is too hard to distinguish from the one used as a number. <<</Additional Steps>>> <<</String-based Extraction Methods>>> <<<Parser-Based Extraction Methods>>> Parser-based extraction is potentially the widest-coverage extraction method, with the capacity to extract both morphological and syntactic variants of the PIE. This should be robust against the most common modifications of the PIE, e.g. through word insertions (spill all the beans), passivisation (the beans were spilled), and abstract over articles (spill beans). In this method, PIEs are extracted using the assumption that any sentence which contains the lemmata of the words in the PIE, in the same dependency relations as in the PIE, contains an instance of the PIE type in question. More concretely, this means that the parse of the sentence should contain the parse tree of the PIE as a subtree. This is illustrated in Figure FIGREF57, which shows the parse tree for the PIE lose the plot, parsed without context. Note that this is a subtree of the parse tree for the sentence `you might just lose the plot completely', which is shown in Figure FIGREF58. Since the sentence parse contains the parse of the PIE, we can conclude that the sentence contains an instance of that PIE and extract the span of the PIE instance. All PIEs are parsed in isolation, based on the assumption that all PIEs can be parsed, since they are almost always well-formed phrases. However, not all PIEs will be parsed correctly, especially since there is no context to resolve ambiguity. Errors tend to occur at the part-of-speech level, where, for example, verb-object combinations like jump ship and touch wood are erroneously tagged as noun-noun compounds. An analysis of the impact of parser error on PIE extraction performance is presented in Section SECREF73. 
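The core subtree-containment check can be sketched as follows. Tokens are assumed to expose spaCy-style attributes (.lemma_, .dep_, .children, .i), and the special handling of passivisation and of placeholders such as someone's, described below, is omitted for brevity.

    ARTICLES = {"a", "an", "the"}

    def match_subtree(pie_tok, sent_tok, ignore_labels=False):
        # Return the sentence tokens aligned with the PIE subtree rooted at
        # pie_tok, or None if that subtree does not occur under sent_tok.
        if pie_tok.lemma_ != sent_tok.lemma_:
            return None
        matched = [sent_tok]
        for pie_child in pie_tok.children:
            if pie_child.lemma_ in ARTICLES:
                continue  # articles are ignored during matching
            for sent_child in sent_tok.children:
                if ignore_labels or pie_child.dep_ == sent_child.dep_:
                    sub = match_subtree(pie_child, sent_child, ignore_labels)
                    if sub is not None:
                        matched.extend(sub)
                        break
            else:
                return None  # no child of the sentence node matches this PIE child
        return matched

    def extract_span(pie_root, sentence_doc, ignore_labels=False):
        # Return the (first, last) token indices of a PIE instance, or None.
        for token in sentence_doc:
            matched = match_subtree(pie_root, token, ignore_labels)
            if matched is not None:
                indices = sorted(t.i for t in matched)
                return indices[0], indices[-1]
        return None

Comparing lemmas rather than surface forms, and requiring the same head-dependent structure, is what lets this method pick up inserted material and word order variation that the string-based methods miss.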
Initially, we use the Spacy parser for parsing both the PIEs and the sentences. Next, the sentence is parsed, and the lemma of the top node of the parsed PIE is matched against the lemmata of the sentence parse. If a match is found, the parse tree of the PIE is matched against the subtree of the matching sentence parse node. If the whole PIE parse tree matches, the span ranging from the first PIE token to the last is extracted. This span can thus include words that are not directly part of the PIE's dictionary form, in order to account for insertions like ships were jumped for jump ship, or have a big heart for have a heart. During the matching, articles (a/an/the) are ignored, and passivisation is accounted for with a special rule. In addition, a number of special cases are dealt with. These are PIEs containing someone('s), something('s), one's, or —. These words are used in PIEs as placeholders for a generic possessor (someone's/something's/one's), generic object (someone/something), or any word of the right PoS (—). For someone's, and something's, we match any possessive pronoun, or (proper) noun + possessive marker. For one's, only possessive pronouns are matched, since this is a placeholder for reflexive possessors. For someone and something, any non-possessive pronoun or (proper) noun is matched. For — wildcards, any word can be matched, as long as it has the right relation to the right head. An additional challenge with these wildcards is that PIEs containing them cannot be parsed, e.g. too — for words is not parseable. This is dealt with by substituting the — by a PoS-ambiguous word, such as fine, or back. Two optional features are added to the parser-based method with the goal of making it more robust to parser errors: generalising over dependency relation labels, and generalising over dependency relation direction. We expect this to increase recall at the cost of precision. In the first no labels setting, we match parts of the parse tree which have the same head lemma and the same dependent lemma, regardless of the relation label. An example of this is Figure FIGREF60, which has the wrong relation label between up and ante. If labels are ignored, however, we can still extract the PIE instance in Figure FIGREF61, which has the correct label. In the no directionality setting, relation labels are also ignored, and in addition the directionality of the relation is ignored, that is, we allow for the reversal of heads and dependents. This benefits performance in a case like Figure FIGREF62, which has stock as the head of laughing in a compound relation, whereas the parse of the PIE (Figure FIGREF63) has laughing as the head of stock in a dobj relation. Note that similar settings were implemented by BIBREF26, who detect literal uses of VMWEs using a parser-based method with either full labelled dependencies, unlabelled dependencies, or directionless unlabelled dependencies (which they call BagOfDeps). They find that recall increases when less restrictions on the dependencies are used, but that this does not hurt precision, as we would expect. However, we cannot draw too many conclusions from these results due to the small size of their evaluation set, which consists of just 72 literal VMWEs in total. <<<In-Context Parsing>>> Since the parser-based method parses PIEs without any context, it often finds an incorrect parse, as for jump ship in Figure FIGREF65. 
As such, we add an option to the method that aims to increase the number of correct parses by parsing the PIE within context, that is, within a sentence. This can greatly help to disambiguate the parse, as in Figure FIGREF66. If the number of correct parses goes up, the recall of the extraction method should also increase. Naturally, it can also be the case that a PIE is parsed correctly without context, and incorrectly with context. However, we expect the gains to outweigh the losses. The challenge here is thus to collect example sentences containing the PIE. Since the whole point of this work is to extract PIEs from raw text, this provides a catch-22-like situation: we need to extract a sentence containing a PIE in order to extract sentences containing a PIE. The workaround for this problem is to use the exact string matching method with the dictionary form of the PIE and a very large plain text corpus to gather example sentences. By only considering the exact dictionary form we both simplify the finding of example sentences and the extraction of the PIE's parse from the sentence parse. In case multiple example sentences are found, the shortest sentence is selected, since we assume it is easiest to parse. This is also the reason we make use of very large corpora, to increase the likelihood of finding a short, simple sentence. The example sentence extraction method is modified in such a way that sentences where the PIE is used meta-linguistically in quotes, e.g. “the well-known English idiom `to spill the beans' has no equivalents in other languages”, are excluded, since they do not provide a natural context for parsing. When no example sentence can be found in the corpus, we back-off to parsing the PIE without context. After a parse has been found for each PIE (i.e. with or without context), the method proceeds identically to the regular parser-based method. We make use of the combination of two large corpora for the extraction of example sentences: the English Wikipedia, and ukWaC BIBREF17. For the Wikipedia corpus, we use a dump (13-01-2016) of the English-language Wikipedia, and remove all Wikipedia markup. This is done using WikiExtractor. The resulting files still contain some mark-up, which is removed heuristically. The resulting corpus contains mostly clean, raw, untokenized text, numbering approximately 1.78 billion tokens. As for ukWaC, all XML-markup was removed, and the corpus is converted to a one-sentence-per-line format. UkWaC is tokenized, which makes it difficult for a simple string match method to find PIEs containing punctuation, for example day in, day out. Therefore, all spaces before commas, apostrophes, and sentence-final punctuation are removed. The resulting corpus contains approximately 2.05 billion tokens, making for a total of 3.83 billion tokens in the combined ukWaC and Wikipedia corpus. <<</In-Context Parsing>>> <<</Parser-Based Extraction Methods>>> <<<Analysis>>> Broadly speaking, the PIE extraction systems presented above perform in line with expectations. It is nevertheless useful to see where the best-performing system misses out, and where improvements like in-context parsing help performance. We analyse the shortcomings of the in-context parser-based system by looking at the false positives and false negatives on the development set. We consider the output of the system with best overall performance, since it will provide the clearest picture. 
The system extracts 529 PIEs in total, of which 54 are false extractions (false positives), and it misses 69 annotated PIE instances (false negatives). Most false positives stem from the system's failure to capture nuances of PIE annotation. This includes cases where PIEs contain, or are part of, proper nouns (Example SECREF73), PIEs that are part of coordination constructions (Example SECREF73), and incorrect attachments (Example SECREF73). Among these errors, sentences containing proper nouns are an especially frequent problem. . Drama series include [..] airline security thrills in Cleared For Takeoff and Head Over Heels [..] (in the clear - BNC - document CBC - sentence 5177) . They prefer silk, satin or lace underwear in tasteful black or ivory. (in the black - BNC - document CBC - sentence 14673) . [..] `I saw this chap make something out of an ordinary piece of wood — he fashioned it into an exquisite work of art.' (out of the woods - BNC - document ABV - sentence 1300) The main cause of false negatives is errors made by the parser. In order to correctly extract a PIE from a sentence, both the PIE and the sentence have to be parsed correctly, or at least parsed in the same way. This means a missed extraction can be caused by a wrong parse for the PIE or a wrong parse for the sentence. These two error types form the largest class of false negatives. Since some PIE types are rather frequent, a wrong parse for a single PIE type can potentially lead to a large number of missed extractions. It is not surprising that the parser makes many mistakes, since idioms often have unusual syntactic constructions (e.g. come a cropper) and contain words where default part-of-speech tags lead to the wrong interpretation (e.g. round is a preposition in round the bend, not a noun or adjective). This is especially true when idioms are parsed without context, and hence this is where in-context parsing provides the largest benefit: the number of PIEs which are parsed incorrectly drops, which leads to F1-scores on those types going from 0% to almost 100% (e.g. in light of and ring a bell). Since parser errors are the main contributor to false negatives, hurting recall, we observe that parsing idioms in context serves to benefit only recall, by 7 percentage points, at only a small loss in precision. We find that adding context mainly helps for parsing expressions which are structurally relatively simple, but still ambiguous, such as rub shoulders, laughing stock, and round the bend. Compare, for example, the parse trees for laughing stock in isolation and within the extracted context sentence in Figures FIGREF74 and FIGREF75. When parsed in isolation, the relation between the two words is incorrectly labelled as a compound relation, whereas in context it is correctly labelled as a direct object relation. Note, however, that for the most difficult PIEs, embedding them in a context does not solve the parsing problem: a syntactically odd phrase is hard to parse (e.g. for the time being), and a syntactically odd phrase in a sentence makes for a syntactically odd sentence that is still hard to parse (e.g. `London for the time being had been abandoned.'). Finding example sentences turned out not to be a problem, since appropriate sentences were found for 559 of 591 PIE types. An alternative method for reducing parser error is to use a different, better parser. 
The Spacy parser was mainly chosen for implementation convenience and speed, and there are parsers which have better performance, as measured on established parsing benchmarks. To investigate the effectiveness of this method, we used the Stanford Neural Dependency Parser BIBREF39 to extract PIEs in the regular parsing, in-context parsing and the no labels settings. In all cases, using the Stanford parser yielded worse extraction performance than the Spacy parser. A possible explanation for why a supposedly better parser performs worse here is that parsers are optimised and trained to do well on established benchmarks, which consist of complete sentences, often from news texts. This does not necessarily correlate with parsing performance on short (sentences containing) idiomatic phrases. As such, we cannot assume that better overall parsing performance implies PIE extraction performance. It should be noted that, when assessing the quality of PIE extraction performance, the parser-based methods are sensitive to specific PIE types. That is, if a single PIE type is parsed incorrectly, then it is highly probable that all instances of that type are missed. If this type is also highly frequent, this means that a small change in actual performance yields a large change in evaluation scores. Our goal is to have a PIE extraction system that is robust across all PIE types, and thus the current evaluation setting does not align exactly with our aim. Splitting out performance per PIE type reveals whether there is indeed a large variance in performance across types. Table TABREF76 shows the 25 most frequent PIE types in the corpus, and the performance of the in-context-parsing-based system on each. Except two cases (in the black and round the bend), we see that the performance is in the 80–100% range, even showing perfect performance on the majority of types. For none of the types do we see low precision paired with high recall, which indicates that the parser never matches a highly frequent non-PIE phrase. For the system with the no labels and no-directionality options (per-type numbers not shown here), however, this does occur. For example, ignoring the labels for the parse of the PIE have a go leads to the erroneous matching of many sentences containing a form of have to go, which is highly frequent, thus leading to a large drop in precision. Although performance is stable across the most frequent types, among the less frequent types it is more spotty. This hurts overall performance, and there are potential gains in mitigating the poor performance on these types, such as for the time being. At the same time, the string matching methods show much more stable performance across types, and some of them do so with very high precision. As such, a combination of two such methods could boost performance significantly. If we use a high-precision string match-based method, such as the exact string match variant with a precision of 97.35%, recall could be improved for the wrongly parsed PIE types, without a significant loss of precision. We experiment with two such combinations, by simply taking the union of the sets of extracted idioms of both systems, and filtering out duplicates. Results are shown in Table TABREF77. Both combinations show the expected effect: a clear gain in recall at a minimal loss in precision. 
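The combination step itself is deliberately simple; a sketch, under the assumption that each extraction is represented as a hashable tuple of sentence identifier, PIE type, and span (a representation not fixed above), is:

    def combine_extractions(extractions_a, extractions_b):
        # Union of two systems' outputs; because the extractions are hashable
        # (sentence_id, pie_type, start, end) tuples, duplicates collapse
        # automatically.
        return set(extractions_a) | set(extractions_b)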
Compared to the in-context-parsing-based system, the combination with exact string matching yields a gain in recall of over 6%, and the combination with inflectional string matching yields an even bigger gain of almost 8%, at precision losses of 0.6% and 0.8%, respectively. This indicates that the systems are very much complementary in the PIEs they extract. It also means that, when used in practice, combining inflectional string matching and parse-based extraction is the most reliable configuration. <<</Analysis>>> <<</Dictionary-based PIE Extraction>>> <<<Conclusions and Outlook>>> We present an in-depth study on the automatic extraction of potentially idiomatic expressions based on dictionaries. The purpose of automatic dictionary-based extraction is, on the one hand, to function as a pre-extraction step in the building of a large idiom-annotated corpus. On the other hand, it can function as part of an idiom extraction system when combined with a disambiguation component. In both cases, the ultimate goal is to improve the processing of idiomatic expressions within NLP. This work consists of three parts: a comparative evaluation of the coverage of idiom dictionaries, the annotation of a PIE corpus, and the development and evaluation of several dictionary-based PIE extraction methods. In the first part, we present a study of idiom dictionary coverage, which serves to answer the question of whether a single idiom dictionary, or a combination of dictionaries, can provide good coverage of the set of all English idioms. Based on the comparison of dictionaries to each other, we estimate that the overlap between them is limited, varying from 20% to 55%, which indicates a large divergence between the dictionaries. This can be explained by the fact that idioms vary widely by register, genre, language variety, and time period. In our case, it is also likely that the divergence is caused partly by the gap between crowdsourced dictionaries on the one hand, and a dictionary compiled by professional lexicographers on the other. Given these factors, we can conclude that a single dictionary cannot provide even close to complete coverage of English idioms, but that by combining dictionaries from various sources, significant gains can be made. Since `English idioms' are a diffuse and constantly changing set, we have no gold standard to compare to. As such, we conclude that multiple dictionaries should be used when possible, but that we cannot say anything definitive on the coverage of dictionaries with regard to the complete set of English idioms (which can only be approximated in the first place). A more comprehensive comparison of idiom resources could be made in the future by using more advanced automatic methods for matching, for example by using BIBREF32's method for measuring expression variability. This would make it easier to evaluate a larger number of dictionaries, since no manual effort would be required. In the second part, we experiment with the exhaustive annotation of PIEs in a corpus of documents from the BNC. Using a set of 591 PIE types, much larger and more varied than in existing resources, we show that it is very much possible to establish a working definition of PIE that allows for a large amount of variation, while still being useful for reliable annotation. This resulted in high inter-annotator agreement, ranging from 0.74 to 0.91 Fleiss' Kappa. This means that we can build a resource to evaluate a wide-range idiom extraction system with relatively little effort. 
The final corpus of PIEs with sense annotations is publicly available; it consists of 2,239 PIE candidates, of which 1,050 are actual PIE instances, and contains 278 different PIE types. Finally, several methods for the automatic extraction of PIE instances were developed and evaluated on the annotated PIE corpus. We tested methods of differing complexity, from simple string match to dependency parse-based extraction. Comparison of these methods revealed that the more computationally complex method, parser-based extraction, works best. Parser-based extraction is especially effective in capturing a larger amount of variation, but is less precise than string-based methods, mostly because of parser error. The best overall setting of this method, which parses idioms within context, yielded an F1-score of 89.13% on the test set. Parser error can be partly compensated for by combining the parse-based method and the inflectional string match method, which yields an F1-score of 92.01% (on the development set). This aligns well with the findings of BIBREF27, who found that combining simpler and more complex methods improves over using just a simple method in the case of extracting verb-particle constructions. This level of performance means that we can use the tool in corpus building. This greatly reduces the amount of manual extraction effort involved, while still maintaining a high level of recall. We make the source code for the different systems publicly available. Note that, although used here in the context of PIE extraction, our methods are equally applicable to other phrase extraction tasks, for example the extraction of light-verb constructions, metaphoric constructions, collocations, or any other type of multiword expression (cf. BIBREF27, BIBREF25, BIBREF26). Similarly, our method can be conceived as a blueprint and extended to languages other than English. For this to be possible, for any given new language one would need a list of target expressions and, in the case of the parser-based method, a reliable syntactic parser. If this is not the case, the inflectional matching method can be used, which requires only a morphological analyser and generator. Obviously, for languages that are morphologically richer than English, one would need to develop strategies aimed at controlling non-exact matches, so as to enhance recall without sacrificing precision. Previous work on Italian, for example, has shown the feasibility of achieving such balance through controlled pattern matching BIBREF40. Languages that are typologically very different from English would obviously require a dedicated approach for the matching of PIEs in corpora, but the overall principles of extraction, using language-specific tools, could stay the same. Currently, no corpora containing annotation of PIEs exist for languages other than English. However, the PARSEME corpus BIBREF19 already contains idioms (only idiomatic readings) for many languages and would only need annotation of literal usages of idioms to make up a set of PIEs. Paired with the Universal Dependencies project BIBREF41, which increasingly provides annotated data as well as processing tools for an ever growing number of languages, this seems an excellent starting point for creating PIE resources in multiple languages. <<</Conclusions and Outlook>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nNew Terminology: Potentially Idiomatic Expression (PIE)\nRelated Work\nAnnotated Corpora and Annotation Schemes for Idioms\nVNC-Tokens\nGigaword\nIDIX\nSemEval-2013 Task 5b\nGeneral Multiword Expression Corpora\nOverview\nExtracting Idioms from Corpora\nCoverage of Idiom Inventories\nBackground\nSelected Idiom Resources (Data and Method)\nMethod\nResults\nCorpus Annotation\nEvaluating the Extraction Methods\nBase Corpus and Idiom Selection\nExtraction of PIE Candidates\nAnnotation Procedure\nDictionary-based PIE Extraction\nString-based Extraction Methods\nExact String Match\nFuzzy String Match\nInflectional String Match\nAdditional Steps\nParser-Based Extraction Methods\nIn-Context Parsing\nAnalysis\nConclusions and Outlook" ], "type": "outline" }
1910.11235
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Rethinking Exposure Bias In Language Modeling <<<Abstract>>> Exposure bias describes the phenomenon that a language model trained under the teacher forcing schema may perform poorly at the inference stage when its predictions are conditioned on its previous predictions unseen from the training corpus. Recently, several generative adversarial networks (GANs) and reinforcement learning (RL) methods have been introduced to alleviate this problem. Nonetheless, a common issue in RL and GANs training is the sparsity of reward signals. In this paper, we adopt two simple strategies, multi-range reinforcing, and multi-entropy sampling, to amplify and denoise the reward signal. Our model produces an improvement over competing models with regards to BLEU scores and road exam, a new metric we designed to measure the robustness against exposure bias in language models. <<</Abstract>>> <<<Introduction>>> Likelihood-based language models with deep neural networks have been widely adopted to tackle language tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. By far, one of the most popular training strategies is teacher forcing, which derives from the general maximum likelihood estimation (MLE) principle BIBREF4. Under the teacher forcing schema, a model is trained to make predictions conditioned on ground-truth inputs. Although this strategy enables effective training of large neural networks, it is susceptible to aggravate exposure bias: a model may perform poorly at the inference stage, once its self-generated prefix diverges from the previously learned ground-truth data BIBREF5. A common approach to mitigate this problem is to impose supervision upon the model's own exploration. To this objective, existing literature have introduced REINFORCE BIBREF6 and actor-critic (AC) methods BIBREF7 (including language GANs BIBREF8), which offer direct feedback on a model's self-generated sequences, so the model can later, at the inference stage, deal with previously unseen exploratory paths. However, due to the well-known issue of reward sparseness and the potential noises in the critic's feedback, these methods are reported to risk compromising the generation quality, specifically in terms of precision. In this paper, we adopt two simple strategies, multi-range reinforcing and multi-entropy sampling to overcome the reward sparseness during training. With the tricks applied, our model demonstrates a significant improvement over competing models. In addition, we propose road exam as a new metric to reveal a model's robustness against exposure bias. <<</Introduction>>> <<<Related Works>>> As an early work to address exposure bias, BIBREF5 proposed a curriculum learning approach called scheduled sampling, which gradually replaces the ground-truth tokens with the model's own predictions while training. Later, BIBREF9 criticized this approach for pushing the model towards overfitting onto the corpus distribution based on the position of each token in the sequence, instead of learning about the prefix. In recent RL-inspired works, BIBREF10 built on the REINFORCE algorithm to directly optimize the test-time evaluation metric score. 
BIBREF11 employed a similar approach by training a critic network to predict the metric score that the actor's generated sequence of tokens would obtain. In both cases, the reliance on a metric to accurately reflect the quality of generated samples becomes a major limitation. Such metrics are often unavailable and difficult to design by nature. In parallel, adversarial training was introduced into language modeling by SeqGAN BIBREF8. This model consists of a generator pre-trained under MLE and a discriminator pre-trained to discern the generator's distribution from the real data. Follow-up works based on SeqGAN alter their training objectives or model architectures to enhance the guidance signal's informativeness. RankGAN replaces the absolute binary reward with a relative ranking score BIBREF12. LeakGAN allows the discriminator to “leak” its internal states to the generator at intermediate steps BIBREF13. BIBREF14 models a reward function using inverse reinforcement learning (IRL). While much progress have been made, we surprisingly observed that SeqGAN BIBREF8 shows more stable results in road exam in Section SECREF20. Therefore, we aim to amplify and denoise the reward signal in a direct and simple fashion. <<</Related Works>>> <<<Model Description>>> Problem Re-Formulation: Actor-Critic methods (ACs) consider language modeling as a generalized Markov Decision Process (MDP) problem, where the actor learns to optimize its policy guided by the critic, while the critic learns to optimize its value function based on the actor's output and external reward information. As BIBREF15 points out, GAN methods can be seen as a special case of AC where the critic aims to distinguish the actor's generation from real data and the actor is optimized in an opposite direction to the critic. Actor-Critic Training: In this work, we use a standard single-layer LSTM as the actor network. The training objective is to maximize the model's expected end rewards with policy gradient BIBREF16: Then, We use a CNN as the critic to predict the expected rewards for current generated prefix: In practice, we perform a Monte-Carlo (MC) search with roll-out policy following BIBREF8 to sample complete sentences starting from each location in a predicted sequence and compute their end rewards. Empirically, we found out that the maximum, instead of average, of rewards in the MC search better represents each token's actor value and yields better results during training. Therefore, we compute the action value by: In RL and GANs training, two major factors behind the unstable performance are the large variance and the update correlation during the sampling process BIBREF17, BIBREF18. We address these problems using the following strategies: Multi-Range Reinforcing: Our idea of multi-range supervision takes inspiration from deeply-supervised nets (DSNs) BIBREF19. Under deep supervision, intermediate layers of a deep neural network have their own training objectives and receive direct supervision simultaneously with the final decision layer. By design, lower layers in a CNN have smaller receptive fields, allowing them to make better use of local patterns. Our “multi-range" modification enables the critic to focus on local n-gram information in the lower layers while attending to global structural information in the higher layers. This is a solution to the high variance problem, as the actor can receive amplified reward with more local information compared to BIBREF8. 
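A rough sketch of such a multi-range critic is given below; the layer sizes, the max-pooling, and the averaging of the intermediate heads are illustrative assumptions rather than the exact configuration used in this work.

    import torch
    import torch.nn as nn

    class MultiRangeCritic(nn.Module):
        # Sketch of a CNN critic with intermediate output heads, so that lower
        # layers score local n-gram patterns while higher layers score more
        # global structure.
        def __init__(self, emb_dim=32, channels=64, n_layers=8, taps=(2, 4, 7)):
            super().__init__()
            self.convs = nn.ModuleList([
                nn.Conv1d(emb_dim if i == 0 else channels, channels,
                          kernel_size=3, padding=1)
                for i in range(n_layers)])
            self.taps = set(taps)
            self.heads = nn.ModuleDict({str(i): nn.Linear(channels, 1) for i in taps})

        def forward(self, emb):              # emb: (batch, emb_dim, seq_len)
            scores, h = [], emb
            for i, conv in enumerate(self.convs):
                h = torch.relu(conv(h))
                if i in self.taps:
                    pooled = h.max(dim=2).values             # (batch, channels)
                    scores.append(torch.sigmoid(self.heads[str(i)](pooled)))
            return torch.stack(scores).mean(dim=0)           # averaged reward estimate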
Multi-Entropy Sampling: Language GANs can be seen as online RL methods, where the actor is updated from data generated by its own policy with strong correlation. Inspired by BIBREF20, we empirically find that altering the entropy of the actor's sample distribution during training is beneficial to the AC network's robust performance. Specifically, we alternate the temperature $\tau $ to generate samples under different behavior policies. During the critic's training, the ground-truth sequences are assigned a perfect target value of 1. The samples obtained with $\tau < 1$ are supposed to contain lower entropy and to diverge less from the real data, so they receive a higher target value close to 1. Those obtained with $\tau > 1$ contain higher entropy and more errors, so their target values are lower and closer to 0. This mechanism decorrelates updates during sequential sampling by sampling multiple diverse entropy distributions from the actor synchronously. <<<Effectiveness of Multi-Range Reinforcing and Multi-Entropy Sampling>>> Table TABREF5 demonstrates an ablation study on the effectiveness of multi-range reinforcing (MR) and multi-entropy sampling (ME). We observe that ME improves $\text{BLEU}_{\text{F5}}$ (precision) significantly while MR further enhances $\text{BLEU}_{\text{F5}}$ (precision) and $\text{BLEU}_{\text{B5}}$ (recall). Detailed explanations of these metrics can be found in Section SECREF4. <<</Effectiveness of Multi-Range Reinforcing and Multi-Entropy Sampling>>> <<</Model Description>>> <<<Model Evaluation>>> <<<Modeling Capacity & Sentence Quality>>> We adopt three variations of the BLEU metric from BIBREF14 to reflect precision and recall. $\textbf {BLEU}_{\textbf {F}}$, or forward BLEU, is a metric for precision. It uses the real test dataset as references to calculate how many n-grams in the generated samples can be found in the real data. $\textbf {BLEU}_{\textbf {B}}$, or backward BLEU, is a metric for recall. This metric takes both diversity and quality into account. A model with severe mode collapse or diverse but incorrect outputs will receive poor scores in $\text{BLEU}_{\text{B}}$. $\textbf {BLEU}_{\textbf {HA}}$ is the harmonic mean of $\text{BLEU}_{\text{F}}$ and $\text{BLEU}_{\text{B}}$, given by: $\text{BLEU}_{\text{HA}} = \frac{2 \cdot \text{BLEU}_{\text{F}} \cdot \text{BLEU}_{\text{B}}}{\text{BLEU}_{\text{F}} + \text{BLEU}_{\text{B}}}$. <<</Modeling Capacity & Sentence Quality>>> <<<Exposure Bias Attacks>>> Road Exam is a novel test we propose as a direct evaluation of exposure bias. In this test, a sentence prefix of length $K$, taken from either the training or the testing dataset, is fed into the model under assessment to perform a sentence completion task. Thereby, the model is directed onto either a seen or an unseen “road” to begin its generation. Because precision is the primary concern, we set $\tau =0.5$ to sample high-confidence sentences from each model's distribution. We compare $\text{BLEU}_{\text{F}}$ of each model on both seen and unseen completion tasks and over a range of prefix lengths. By definition, a model with exposure bias should perform worse in completing sentences with an unfamiliar prefix. The sentence completion quality should decay more drastically as the unfamiliar prefix grows longer. <<</Exposure Bias Attacks>>> <<</Model Evaluation>>> <<<Experiment>>> <<<Datasets>>> We evaluate on two datasets: EMNLP2017 WMT News and Google-small, a subset of Google One Billion Words. EMNLP2017 WMT News is provided in BIBREF21, a benchmarking platform for text generation models. 
We split the entire dataset into a training set of 195,010 sentences, a validation set of 83,576 sentences, and a test set of 10,000 sentences. The vocabulary size is 5,254 and the average sentence length is 27. Google-small is sampled and pre-processed from the Google One Billion Words corpus. It contains a training set of 699,967 sentences, a validation set of 200,000 sentences, and a test set of 99,985 sentences. The vocabulary size is 61,458 and the average sentence length is 29. <<</Datasets>>> <<<Implementation Details>>> <<<Network Architecture:>>> We implement a standard single-layer LSTM as the generator (actor) and an eight-layer CNN as the discriminator (critic). The LSTM has embedding dimension 32 and hidden dimension 256. The CNN consists of 8 layers with filter size 3, where the 3rd, 5th, and 8th layers are directly connected to the output layer for multi-range supervision. Other parameters are consistent with BIBREF21. <<</Network Architecture:>>> <<<Training Settings:>>> The Adam optimizer is deployed for both the critic and the actor, with learning rates of $10^{-4}$ and $5 \cdot 10^{-3}$, respectively. The target values for the critic network are set to [0, 0.2, 0.4, 0.6, 0.8] for samples generated by the RNN with softmax temperatures [0.5, 0.75, 1.0, 1.25, 1.5]. <<</Training Settings:>>> <<</Implementation Details>>> <<<Discussion>>> Table TABREF9 and Table TABREF10 compare models on EMNLP2017 WMT News and Google-small. Our model outperforms the others in $\text{BLEU}_{\text{F5}}$, $\text{BLEU}_{\text{B5}}$, and $\text{BLEU}_{\text{HA5}}$, indicating a high diversity and quality in its sample distribution. It is noteworthy that LeakGAN and our model are the only two models to demonstrate improvements on $\text{BLEU}_{\text{B5}}$ over the teacher forcing baseline. The distinctive increment in recall indicates less mode collapse, which is a common problem in language GANs and ACs. Figure FIGREF16 demonstrates the road exam results on EMNLP2017 WMT News. All models decrease in sampling precision (reflected via $\text{BLEU}_{\text{F4}}$) as the fed-in prefix length ($K$) increases, but the effect is stronger on the unseen test data, revealing the existence of exposure bias. Nonetheless, our model trained under ME and MR yields the best sentence quality and a relatively moderate performance decline. Although TF and SS demonstrate higher $\text{BLEU}_{\text{F5}}$ performance with shorter prefixes, their sentence qualities drop drastically on the test dataset with longer prefixes. On the other hand, GANs begin with lower $\text{BLEU}_{\text{F4}}$ precision scores but demonstrate less performance decay as the prefix grows longer and gradually outperform TF. This robustness against unseen prefixes shows that supervision from a learned critic can boost a model's stability in completing unseen sequences. The better generative quality in TF and the stronger robustness against exposure bias in GANs are two different objectives in language modeling, but they can be pursued at the same time. Our model's improvement in both respects exhibits one possibility of achieving this. <<</Discussion>>> <<</Experiment>>> <<<Conclusion>>> We have presented multi-range reinforcing and multi-entropy sampling as two training strategies built upon deeply supervised nets BIBREF19 and multi-entropy sampling BIBREF20. The two easy-to-implement strategies help alleviate the reward sparseness in RL training and tackle the exposure bias problem. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Works\nModel Description\nEffectiveness of Multi-Range Reinforcing and Multi-Entropy Sampling\nModel Evaluation\nModeling Capacity & Sentence Quality\nExposure Bias Attacks\nExperiment\nDatasets\nImplementation Details\nNetwork Architecture:\nTraining Settings:\nDiscussion\nConclusion" ], "type": "outline" }
1909.00107
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Behavior Gated Language Models <<<Abstract>>> Most current language modeling techniques only exploit co-occurrence, semantic and syntactic information from the sequence of words. However, a range of information such as the state of the speaker and dynamics of the interaction might be useful. In this work we derive motivation from psycholinguistics and propose the addition of behavioral information into the context of language modeling. We propose the augmentation of language models with an additional module which analyzes the behavioral state of the current context. This behavioral information is used to gate the outputs of the language model before the final word prediction output. We show that the addition of behavioral context in language models achieves lower perplexities on behavior-rich datasets. We also confirm the validity of the proposed models on a variety of model architectures and improve on previous state-of-the-art models with generic domain Penn Treebank Corpus. <<</Abstract>>> <<<Introduction>>> Recurrent neural network language models (RNNLM) can theoretically model the word history over an arbitrarily long length of time and thus have been shown to perform better than traditional n-gram models BIBREF0. Recent prior work has continuously improved the performance of RNNLMs through hyper-parameter tuning, training optimization methods, and development of new network architectures BIBREF1, BIBREF2, BIBREF3, BIBREF4. On the other hand, many work have proposed the use of domain knowledge and additional information such as topics or parts-of-speech to improve language models. While syntactic tendencies can be inferred from a few preceding words, semantic coherence may require longer context and high level understanding of natural language, both of which are difficult to learn through purely statistical methods. This problem can be overcome by exploiting external information to capture long-range semantic dependencies. One common way of achieving this is by incorporating part-of-speech (POS) tags into the RNNLM as an additional feature to predict the next word BIBREF5, BIBREF6. Other useful linguistic features include conversation-type, which was shown to improve language modeling when combined with POS tags BIBREF7. Further improvements were achieved through the addition of socio-situational setting information and other linguistic features such as lemmas and topic BIBREF8. The use of topic information to provide semantic context to language models has also been studied extensively BIBREF9, BIBREF10, BIBREF11, BIBREF12. Topic models are useful for extracting high level semantic structure via latent topics which can aid in better modeling of longer documents. Recently, however, empirical studies involving investigation of different network architectures, hyper-parameter tuning, and optimization techniques have yielded better performance than the addition of contextual information BIBREF13, BIBREF14. In contrast to the majority of work that focus on improving the neural network aspects of RNNLM, we introduce psycholinguistic signals along with linguistic units to improve the fundamental language model. 
In this work, we utilize behavioral information embedded in the language to aid the language model. We hypothesize that different psychological behavior states incite differences in the use of language BIBREF15, BIBREF16, and thus modeling these tendencies can provide useful information in statistical language modeling. And although not directly related, behavioral information may also correlate with conversation-type and topic. Thus, we propose the use of psycholinguistic behavior signals as a gating mechanism to augment typical language models. Effectively inferring behaviors from sources like spoken text, written articles can lead to personification of the language models in the speaker-writer arena. <<</Introduction>>> <<<Methodology>>> In this section, we first describe a typical RNN based language model which serves as a baseline for this study. Second, we introduce the proposed behavior prediction model for extracting behavioral information. Finally, the proposed architecture of the language model which incorporates the behavioral information through a gating mechanism is presented. <<<Language Model>>> The basic RNNLM consists of a vanilla unidirectional LSTM which predicts the next word given the current and its word history at each time step. In other words, given a sequence of words $ \mathbf {x} \hspace{2.77771pt}{=}\hspace{2.77771pt}x_1, x_2, \ldots x_n$ as input, the network predicts a probability distribution of the next word $ y $ as $ P(y \mid \mathbf {x}) $. Figure FIGREF2 illustrates the basic architecture of the RNNLM. Since our contribution is towards introducing behavior as a psycholinguistic feature for aiding the language modeling process, we stick with a reliable and simple LSTM-based RNN model and follow the recommendations from BIBREF1 for our baseline model. <<</Language Model>>> <<<Behavior Model>>> The analysis and processing of human behavior informatics is crucial in many psychotherapy settings such as observational studies and patient therapy BIBREF17. Prior work has proposed the application of neural networks in modeling human behavior in a variety of clinical settings BIBREF18, BIBREF19, BIBREF20. In this work we adopt a behavior model that predicts the likelihood of occurrence of various behaviors based on input text. Our model is based on the RNN architecture in Figure FIGREF2, but instead of the next word we predict the joint probability of behavior occurrences $ P(\mathbf {B} \mid \mathbf {x}) $ where $ \mathbf {B} \hspace{2.77771pt}{=}\hspace{2.77771pt}\lbrace b_{i}\rbrace $ and $ b_{i} $ is the occurrence of behavior $i$. In this work we apply the behaviors: Acceptance, Blame, Negativity, Positivity, and Sadness. This is elaborated more on in Section SECREF3. <<</Behavior Model>>> <<<Behavior Gated Language Model>>> <<<Motivation>>> Behavior understanding encapsulates a long-term trajectory of a person's psychological state. Through the course of communication, these states may manifest as short-term instances of emotion or sentiment. Previous work has studied the links between these psychological states and their effect on vocabulary and choice of words BIBREF15 as well as use of language BIBREF16. Motivated from these studies, we hypothesize that due to the duality of behavior and language we can improve language models by capturing variability in language use caused by different psychological states through the inclusion of behavioral information. 
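One possible way to wire up such gating, shown purely as a schematic sketch rather than the exact architecture proposed in the next subsection, is to scale the language model's hidden states element-wise by a sigmoid gate computed from a frozen behavior encoder; the behavior_encoder interface and all dimensions below are assumptions.

    import torch
    import torch.nn as nn

    class BehaviorGatedLM(nn.Module):
        # Schematic sketch: an LSTM language model whose hidden states are
        # gated element-wise by a signal derived from a pretrained behavior
        # encoder, which is kept frozen during language model training.
        def __init__(self, vocab_size, behavior_encoder, emb_dim=256,
                     hid_dim=256, beh_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lm_rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.behavior_encoder = behavior_encoder      # assumed: tokens -> (batch, seq, beh_dim)
            for p in self.behavior_encoder.parameters():
                p.requires_grad = False                   # frozen pretrained weights
            self.beh_rnn = nn.LSTM(beh_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, tokens):
            emb = self.embed(tokens)                      # (batch, seq, emb_dim)
            lm_h, _ = self.lm_rnn(emb)                    # (batch, seq, hid_dim)
            beh_feat = self.behavior_encoder(tokens)      # (batch, seq, beh_dim)
            beh_h, _ = self.beh_rnn(beh_feat)             # (batch, seq, hid_dim)
            gated = lm_h * torch.sigmoid(beh_h)           # element-wise gating
            return self.out(gated)                        # next-word logits

Because the gate is bounded between 0 and 1, the behavior signal in this sketch can only modulate, not override, the language model's own representation.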
<<</Motivation>>> <<<Proposed Model>>> We propose to augment RNN language models with a behavior model that provides information relating to a speaker's psychological state. This behavioral information is combined with hidden layers of the RNNLM through a gating mechanism prior to output prediction of the next word. In contrast to typical language models, we propose to model $ P(\mathbf {y} \mid \mathbf {x}, \mathbf {z}) $ where $ \mathbf {z} \equiv f( P(\mathbf {B}\mid \mathbf {x}))$ for an RNN function $f(\cdot )$. The behavior model is implemented with a multi-layered RNN over the input sequence of words. The first recurrent layer of the behavior model is initialized with pre-trained weights from the model described in Section SECREF3 and fixed during language modeling training. An overview of the proposed behavior gated language model is shown in Figure FIGREF6. The RNN units shaded in green (lower section) denote the pre-trained weights from the behavior classification model which are fixed during the entirety of training. The abstract behavior outputs $ b_t $ of the pre-trained model are fed into a time-synced RNN, denoted in blue (upper section), which is subsequently used for gating the RNNLM predictions. The un-shaded RNN units correspond to typical RNNLM and operate in parallel to the former. <<</Proposed Model>>> <<</Behavior Gated Language Model>>> <<</Methodology>>> <<<Experimental Setup>>> <<<Data>>> <<<Behavior Related Corpora>>> For evaluating the proposed model on behavior related data, we employ the Couples Therapy Corpus (CoupTher) BIBREF21 and Cancer Couples Interaction Dataset (Cancer) BIBREF22. These are the targeted conditions under which a behavior-gated language model can offer improved performance. Couples Therapy Corpus: This corpus comprises of dyadic conversations between real couples seeking marital counseling. The dataset consists of audio, video recordings along with their transcriptions. Each speaker is rated by multiple annotators over 33 behaviors. The dataset comprises of approximately 0.83 million words with 10,000 unique entries of which 0.5 million is used for training (0.24m for dev and 88k for test). Cancer Couples Interaction Dataset: This dataset was gathered as part of a observational study of couples coping with advanced cancer. Advanced cancer patients and their spouse caregivers were recruited from clinics and asked to interact with each other in two structured discussions: neutral discussion and cancer related. Interactions were audio-recorded using small digital recorders worn by each participant. Manually transcribed audio has approximately 230,000 word tokens with a vocabulary size of 8173. <<<Couple's Therapy Corpus>>> We utilize the Couple's Therapy Corpus as an in-domain experimental corpus since our behavior classification model is also trained on the same. The RNNLM architecture is similar to BIBREF1, but with hyperparameters optimized for the couple's corpus. The results are tabulated in Table TABREF16 in terms of perplexity. We find that the behavior gated language models yield lower perplexity compared to vanilla LSTM language model. A relative improvement of 2.43% is obtained with behavior gating on the couple's data. <<</Couple's Therapy Corpus>>> <<<Cancer Couples Interaction Dataset>>> To evaluate the validity of the proposed method on an out-of-domain but behavior related task, we utilize the Cancer Couples Interaction Dataset. Here both the language and the behavior models are trained on the Couple's Therapy Corpus. 
The Cancer dataset is used only for development (hyper-parameter tuning) and testing. We observe that the behavior gating helps achieve lower perplexity values with a relative improvement of 6.81%. The performance improvements on an out-of-domain task emphasizes the effectiveness of behavior gated language models. <<</Cancer Couples Interaction Dataset>>> <<</Behavior Related Corpora>>> <<<Penn Tree Bank Corpus>>> In order to evaluate our proposed model on more generic language modeling tasks, we employ Penn Tree bank (PTB) BIBREF23, as preprocessed by BIBREF24. Since Penn Tree bank mainly comprises of articles from Wall Street Journal it is not expected to contain substantial expressions of behavior. <<<Previous state-of-the-art architectures>>> Finally we apply behavior gating on a previous state-of-the-art architecture, one that is most often used as a benchmark over various recent works. Specifically, we employ the AWD-LSTM proposed by BIBREF2 with QRNN BIBREF25 instead of LSTM. We observe positive results with AWD-LSTM augmented with behavior-gating providing a relative improvement of (1.42% on valid) 0.66% in perplexity (Table TABREF17). <<</Previous state-of-the-art architectures>>> <<</Penn Tree Bank Corpus>>> <<</Data>>> <<<Hyperparameters>>> We augmented previous RNN language model architectures by BIBREF1 and BIBREF2 with our proposed behavior gates. We used the same architecture as in each work to maintain similar number of parameters and performed a grid search of hyperparameters such as learning rate, dropout, and batch size. The number of layers and size of the final layers of the behavior model was also optimized. We report the results of models based on the best validation result. <<</Hyperparameters>>> <<</Experimental Setup>>> <<<Results>>> We split the results into two parts. We first validate the proposed technique on behavior related language modeling tasks and then apply it on more generic domain Penn Tree bank dataset. <<</Results>>> <<<Conclusion & Future Work>>> In this study, we introduce the state of the speaker/author into language modeling in the form of behavior signals. We track 5 behaviors namely acceptance, blame, negativity, positivity and sadness using a 5 class multi-label behavior classification model. The behavior states are used as gating mechanism for a typical RNN based language model. We show through our experiments that the proposed technique improves language modeling perplexity specifically in the case of behavior-rich scenarios. Finally, we show improvements on the previous state-of-the-art benchmark model with Penn Tree Bank Corpus to underline the affect of behavior states in language modeling. In future, we plan to incorporate the behavior-gated language model into the task of automatic speech recognition (ASR). In such scenario, we could derive both the past and the future behavior states from the ASR which could then be used to gate the language model using two pass re-scoring strategies. We expect the behavior states to be less prone to errors made by ASR over a sufficiently long context and hence believe the future behavior states to provide further improvements. <<</Conclusion & Future Work>>> <<</Title>>>
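For concreteness, the perplexity and relative-improvement figures quoted in the experiments above can be computed as sketched below; the numbers passed in the usage line are placeholders, not values from the paper.

import math

def perplexity(total_nll: float, n_tokens: int) -> float:
    # Corpus perplexity from a summed negative log-likelihood (natural log).
    return math.exp(total_nll / n_tokens)

def relative_improvement(ppl_baseline: float, ppl_gated: float) -> float:
    # Relative perplexity reduction of the gated model over the baseline, in percent.
    return 100.0 * (ppl_baseline - ppl_gated) / ppl_baseline

# usage sketch with placeholder numbers
print(relative_improvement(ppl_baseline=perplexity(5.2e5, 88_000),
                           ppl_gated=perplexity(5.1e5, 88_000)))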
{ "references": [ "Title\nAbstract\nIntroduction\nMethodology\nLanguage Model\nBehavior Model\nBehavior Gated Language Model\nMotivation\nProposed Model\nExperimental Setup\nData\nBehavior Related Corpora\nCouple's Therapy Corpus\nCancer Couples Interaction Dataset\nPenn Tree Bank Corpus\nPrevious state-of-the-art architectures\nHyperparameters\nResults\nConclusion & Future Work" ], "type": "outline" }
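The reference string above can be reproduced mechanically from the section markers used throughout these contexts. A minimal sketch, assuming every section opens with <<<Name>>> and closes with <<</Name>>> as in the entries shown here:

import re
import json

def extract_outline(context: str) -> str:
    # Collect opening markers <<<Name>>> in order of appearance,
    # skipping closing markers of the form <<</Name>>>.
    names = [m.group(1) for m in re.finditer(r"<<<(?!/)(.*?)>>>", context)]
    return "\n".join(names)

# usage sketch on a tiny synthetic context (not one of the full entries)
sample = "<<<Title>>> ... <<<Abstract>>> text <<</Abstract>>> <<<Introduction>>> ... <<</Introduction>>> <<</Title>>>"
print(json.dumps({"references": [extract_outline(sample)], "type": "outline"}))

Note that an empty opening marker such as <<<>>> yields an empty name and therefore a blank line in the joined outline, which matches how such unnamed sections appear in the reference strings of these entries.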
2003.01006
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources <<<Abstract>>> We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable. <<</Abstract>>> <<<>>> 1.1em <<</>>> <<<Scientific Entity Annotations>>> By starting with a STEM corpus of scholarly abstracts for annotating with scientific entities, we differ from existing work addressing this task since we are going beyond the domain restriction that so far seems to encompass scientific IE. For entity annotations, we rely on existing scientific concept formalisms BIBREF0, BIBREF1, BIBREF2 that appear to propose generic scientific concept types that can bridge the domains we consider, thereby offering a uniform entity selection framework. In the following subsections, we describe our annotation task in detail, after which we conclude with benchmark results. <<<Our Annotation Process>>> The corpus for computing inter-annotator agreement was annotated by two postdoctoral researchers in Computer Science. To develop annotation guidelines, a small pilot annotation exercise was performed on 10 abstracts (one per domain) with a set of surmised generically applicable scientific concepts such as Task, Process, Material, Object, Method, Data, Model, Results, etc., taken from existing work. Over the course of three annotation trials, these concepts were iteratively pruned where concepts that did not cover all domains were dropped, resulting in four finalized concepts, viz. Process, Method, Material, and Data as our resultant set of generic scientific concepts (see Table TABREF3 for their definitions). 
The subsequent annotation task entailed linguistic considerations for the precise selection of entities as one of the four scientific concepts based on their part-of-speech tag or phrase type. Process entities were verbs (e.g., “prune” in Agr), verb phrases (e.g., “integrating results” in Mat), or noun phrases (e.g. “this transport process” in Bio); Method entities comprised noun phrases containing phrase endings such as simulation, method, algorithm, scheme, technique, system, etc.; Material were nouns or noun phrases (e.g., “forest trees” in Agr, “electrons” in Ast or Che, “tephra” in ES); and majority of the Data entities were numbers otherwise noun phrases (e.g., “(2.5$\pm $1.5)kms$^{-1}$” representing a velocity value in Ast, “plant available P status” in Agr). Summarily, the resulting annotation guidelines hinged upon the following five considerations: To ensure consistent scientific entity spans, entities were annotated as definite noun phrases where possible. In later stages, the extraneous determiners and articles could be dropped as deemed appropriate. Coreferring lexical units for scientific entities in the context of a single abstract were annotated with the same concept type. Quantifiable lexical units such as numbers (e.g., years 1999, measurements 4km) or even as phrases (e.g., vascular risk) were annotated as Data. Where possible, the most precise text reference (i.e., phrases with qualifiers) regarding materials used in the experiment were annotated. For instance, “carbon atoms in graphene” was annotated as a single Material entity and not separately as “carbon atoms,” “graphene.” Any confusion in classifying scientific entities as one of four types was resolved using the following concept precedence: Method $>$ Process $>$ Data $>$ Material, where the concept appearing earlier in the list was preferred. After finalizing the concepts and updating the guidelines, the final annotation task proceeded in two phases In phase I, five abstracts per domain (i.e. 50 abstracts) were annotated by both annotators and the inter-annotator agreement was computed using Cohen's $\kappa $ BIBREF4. Results showed a moderate inter-annotator agreement at 0.52 $\kappa $. Next, in phase II, one of the annotators interviewed subject specialists in each of the ten domains about the choice of concepts and her annotation decisions on their respective domain corpus. The feedback from the interviews were systematically categorized into error types and these errors were discussed by both annotators. Following these discussions, the 50 abstracts from phase I were independently reannotated. The annotators could obtain substantial overall agreement of 0.76 $\kappa $ after phase II. In Table TABREF16, we report the IAA scores obtained per domain and overall. The scores show that the annotators had a substantial agreement in seven domains, while only a moderate agreement was reached in three domains, viz. Agr, Mat, and Ast. <<<Annotation Error Analysis>>> We discuss some of the changes the interviewer annotator made in phase II after consultation with the subject experts. In total, 21% of the phase I annotations were changed: Process accounted for a major proportion (nearly 54%) of the changes. Considerable inconsistency was found in annotating verbs like “increasing”, “decreasing”, “enhancing”, etc., as Process or not. Interviews with subject experts confirmed that they were a relevant detail to the research investigation and hence should be annotated. 
So 61% of the Process changes came from additionally annotating these verbs. Material was the second predominantly changed concept in phase II, accounting for 23% of the overall changes. Nearly 32% of the changes under Material came from consistently reannotating phrases about models, tools, and systems; accounting for another 22% of its changes, where spatial locations were an essential part of the investigation such as in the Ast and ES domains, they were decided to be included in the phase II set as Material. Finally, there were some changes that emerged from lack of domain expertise. This was mainly in the medical domain (4.3% of the overall changes) in resolving confusion in annotating Process and Method concept types. Most of the remaining changes were based on the treatment of conjunctive spans or lists. Subsequently, the remaining 60 abstracts (six per domain) were annotated by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus. <<</Annotation Error Analysis>>> <<<Annotated Corpus Characteristics>>> Table TABREF17 shows our annotated corpus characteristics. Our corpus comprises a total of 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities. The number of entities per abstract directly correlates with the length of the abstracts (Pearson's R 0.97). Among the concepts, Process and Material directly correlate with abstract length (R 0.8 and 0.83, respectively), while Data has only a slight correlation (R 0.35) and Method has no correlation (R 0.02). In Figure FIGREF18, we show an example instance of a manually created text graph from the scientific entities in one abstract. The graph highlights that linguistic relations such as synonymy, hypernymy, meronymy, as well as OpenIE relations are poignant even between scientific entities. <<</Annotated Corpus Characteristics>>> <<<Annotation Task Tools>>> During the annotation procedure, each annotator was shown the entities, grouped by domain and file name, in Google Excel Sheet columns alongside a view of the current abstract of entities being annotated in the BRAT interface stenetorp2012brat for context information about the entities. For entity resolution, i.e. linking and disambiguation, the annotators had local installations of specific time-stamped Wikipedia and Wiktionary dumps to enable future persistent references to the links since the Wiki sources are actively revised. They queried the local dumps using the DKPro JWPL tool BIBREF8 for Wikipedia and the DKPro JWKTL tool BIBREF9 for Wiktionary, where both tools enable optimized search through the large Wiki data volume. <<</Annotation Task Tools>>> <<<Annotation Procedure for Entity Resolution>>> Through iterative pilot annotation trials on the same pilot dataset as before, the annotators delineated an ordered annotation procedure depicted in the flowchart in Figure FIGREF28. There are two main annotation phases, viz. a preprocessing phase (determining linkability, determining whether an entity is decomposable into shorter collocations), and the entity resolution phase. The actual annotation task then proceeded, in which to compute agreement scores, the annotators worked on the same set of 50 scholarly abstracts that they had used earlier to compute the scores for the scientific entity annotations. <<<Linkability.>>> In this first step, entities that conveyed a sense of scientific jargon were deemed linkable. 
A natural question that arises, in the context of the Linkability criteria, is: Which stage 1 annotated scientific entities were now deemed unlinkable? They were 1) Data entities that are numbers; 2) entities that are coreference mentions which, as isolated units, lost their precise sense (e.g., “development”); and 3) Process verbs (e.g., “decreasing”, “reconstruct”, etc.). Still, having identified these cases, a caveat remained: except for entities of type Data, the remaining decisions made in this step involved a certain degree of subjectivity because, for instance, not all Process verbs were unlinkable (e.g., “flooding”). Nonetheless, at the end of this step, the annotators obtained a high IAA score at 0.89 $\kappa $. From the agreement scores, we found that the Linkability decisions could be made reliably and consistently on the data. <<</Linkability.>>> <<<Splitting phrases into shorter collocations.>>> While preference was given to annotating non-compositional noun phrases as scientific entities in stage 1, consecutive occurrences of entities of the same concept type separated only by prepositions or conjunctions were merged into longer spans. As examples, consider the phrases “geysers on south polar region,” and “plume of water ice molecules and dust” in Figure FIGREF18. These phrases, respectively, can be meaningfully split as “geysers” and “south polar region” for the first example, and “plume”, “water ice molecules”, and “dust” for the second. As demonstrated in these examples, the stage 1 entities we split in this step are syntactically-flexible multi-word expressions which did not have a strict constraint on composition BIBREF10. For such expressions, we query Wikipedia or Google to identify their splits judging from the number of results returned and whether, in the results, the phrases appeared in authoritative sources (e.g., as overview topics in publishing platforms such as ScienceDirect). Since search engines operate on a vast amount of data, they are a reliable source for determining phrases with a strong statistical regularity, i.e. determining collocations. With a focus on obtaining agreement scores for entity resolution, the annotators bypass this stage for computing independent agreement and attempted it mutually as follows. One annotator determined all splits, wherever required, first. The second annotator acted as judge by going through all the splits and proposed new splits in case of disagreement. The disagreements were discussed by both annotators and the previous steps were repeated iteratively until the dataset was uniformly split. After this stage, both annotators have the same set of entities for resolution. <<</Splitting phrases into shorter collocations.>>> <<<Entity Resolution (ER) Annotation.>>> In this stage, the annotators resolved each entity from the previous step to encyclopedic and lexicographic knowledge bases. While, in principle, multiple knowledge sources can be leveraged, this study only examines scientific entities in the context of their Wiki-linkability. Wikipedia, as the largest online encyclopedia (with nearly 5.9 million English articles) offers a wide coverage of real-world entities, and based on its vast community of editors with editing patterns at the rate of 1.8 edits per second, is considered a reliable source of information. It is pervasively adopted in automatic EL tasks BIBREF11, BIBREF12, BIBREF13 to disambiguate the names of people, places, organizations, etc., to their real-world identities. 
We shift from this focus on proper names as the traditional Wikification EL purpose has been, to its, thus far, seemingly less tapped-in conceptual encyclopedic knowledge of nominal scientific entities. Wiktionary is the largest freely available dictionary resource. Owing to its vast community of curators, it rivals the traditional expert-curated lexicographic resource WordNet BIBREF14 in terms of coverage and updates, where the latter evolves more slowly. For English, Wiktionary has nine times as many entries and at least five times as many senses compared to WordNet. As a more pertinent neologism in the context of our STEM data, consider the sense of term “dropout” as a method for regularizing the neural network algorithms which is already present in Wiktionary. While WSD has been traditionally used WordNet for its high-quality semantic network and longer prevalence in the linguistics community (c.f Navigli navigli2009word for a comprehensive survey), we adopt Wiktionary thus maintaining our focus on collaboratively curated resources. In WSD, entities from all parts-of-speech are enriched w.r.t. language and wordsmithing. But it excludes in-depth factual and encyclopedic information, which otherwise is contained in Wikipedia. Thus, Wikipedia and Wiktionary are viewed as largely complementary. <<</Entity Resolution (ER) Annotation.>>> <<<ER Annotation Task formalism.>>> Given a scholarly abstract $A$ comprising a set of entities $E = \lbrace e_{1}, ... ,e_{N}\rbrace $, the annotation goal is to produce a mapping from $E$ to a set of Wikipedia pages ($p_1,...,p_N$) and Wiktionary senses ($s_1,...,s_N$) as $R = \lbrace (p_1,s_1), ... , (p_N,s_N)\rbrace $. For entities without a mapping, the corresponding $p$ or $s$ refers to Nil. The annotators followed comprehensive guidelines for ER including exceptions. E.g., the conjunctive phrase “acid/alkaline phosphatase activity” was semantically treated as the following two phrases “acid phosphatase activity” or “alkaline phosphatase activity” for EL, however, in the text it was retained as “acid” and “alkaline phosphatase activity.” Since WSD is performed over exact word-forms without assuming any semantic extension, it was not performed for “acid.” Annotations were also made for complex forms of reference such as meronymy (e.g., space instrument “CAPS” to spacecraft “wiki:Cassini Huygens” of which it is a part), or hypernymy (e.g., “parents” in “genepool parents” to “wiki:Ancestor”). As a result of the annotation task, the annotators obtained 82.87% rate of agreement in the EL task and a $\kappa $ score of 0.86 in the WSD task. Contrary to WSD expectations as a challenging linguistics task BIBREF15, we show high agreement; this we attribute to the entities' direct scientific sense and availability in Wiktionary (e.g., “dropout”). Subsequently, the ER annotation for the remaining 60 abstracts (six per domain) were performed by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus. <<</ER Annotation Task formalism.>>> <<</Annotation Procedure for Entity Resolution>>> <<</Our Annotation Process>>> <<<Performance Benchmark>>> In the second stage of the study, we perform word sense disambiguation and link our entities to authoritative sources. 
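A minimal sketch of how the mapping R defined in the formalism above could be represented in code is given below; the field names and the use of None for Nil are illustrative choices of this sketch, not part of the released corpus format.

from dataclasses import dataclass
from typing import Optional, List, Dict, Tuple

@dataclass
class ResolvedEntity:
    # One (p_i, s_i) pair from the formalism above; None plays the role of Nil.
    surface: str                     # the entity span, e.g. "tephra"
    wikipedia_page: Optional[str]    # encyclopedic link (EL) target, or None
    wiktionary_sense: Optional[str]  # lexicographic sense (WSD) label, or None

def resolve(entities: List[str],
            lookup: Dict[str, Tuple[Optional[str], Optional[str]]]) -> List[ResolvedEntity]:
    # Toy resolver over a hand-made lookup table; the table itself is an
    # assumption of this sketch, not an interface of the annotation tools.
    return [ResolvedEntity(e, *lookup.get(e, (None, None))) for e in entities]

# usage sketch: "parents" -> wiki:Ancestor follows the hypernymy example above,
# "tephra" is left unresolved to show the Nil case
table = {"parents": ("wiki:Ancestor", None)}
print(resolve(["parents", "tephra"], table))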
<<</Performance Benchmark>>> <<</Scientific Entity Annotations>>> <<<Scientific Entity Resolution>>> Aside from the four scientific concepts facilitating a common understanding of scientific entities in a multidisciplinary setting, the fact that they are just four made the human annotation task feasible. Utilizing additional concepts would have resulted in a prohibitively expensive human annotation task. Nevertheless, there are existing datasets (particularly in the biomedical domain, e.g., GENIA BIBREF6) that have adopted the conceptual framework in rich domain-specific semantic ontologies. Our work, while related, is different since we target the annotation of multidisciplinary scientific entities that facilitates a low annotation entrance barrier to producing such data. This is beneficial since it enables the task to be performed in a domain-independent manner by researchers, but perhaps not crowdworkers, unless screening tests for a certain level of scientific expertise are created. Nonetheless, we recognize that the four categories might be too limiting for real-world usage. Further, the scientific entities from stage 1 remain susceptible to subjective interpretation without additional information. Therefore, in a similar vein to adopting domain-specific ontologies, we now perform entity linking (EL) to the Wikipedia and word sense disambiguation (WSD) to Wiktionary. <<<Evaluation>>> We do not observe a significant impact of the long-tailed list phenomenon of unresolved entities in our data (c.f Table TABREF36 only 17% did not have EL annotations). Results on more recent publications should perhaps serve more conclusive in this respect for new concepts introduced–the abstracts in our dataset were published between 2012 and 2014. <<</Evaluation>>> <<</Scientific Entity Resolution>>> <<<Conclusion>>> The STEM-ECR v1.0 corpus of scientific abstracts offers multidisciplinary Process, Method, Material, and Data entities that are disambiguated using Wiki-based encyclopedic and lexicographic sources thus facilitating links between scientific publications and real-world knowledge (see the concepts enrichment we obtain from Wikipedia for our entities in Figure ). We have found that these Wikipedia categories do enable a semantic enrichment of our entities over our generic four concept formalism as Process, Material, Method, and Data (as an illustration, the top 30 Wiki categories for each of our four generic concept types are shown in the Appendix). Further, considering the various domains in our multidisciplinary STEM corpus, notably, the inclusion of understudied domains like Mathematics, Astronomy, Earth Science, and Material Science makes our corpus particularly unique w.r.t. the investigation of their scientific entities. This is a step toward exploring domain independence in scientific IE. Our corpus can be leveraged for machine learning experiments in several settings: as a vital active-learning test-bed for curating more varied entity representations BIBREF16; to explore domain-independence versus domain-dependence aspects in scientific IE; for EL and WSD extensions to other ontologies or lexicographic sources; and as a knowledge resource to train a reading machine (such as PIKES BIBREF17 or FRED BIBREF18) that generate more knowledge from massive streams of interdisciplinary scientific articles. We plan to extend this corpus with relations to enable building knowledge representation models such as knowledge graphs in a domain-independent manner. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\n\nScientific Entity Annotations\nOur Annotation Process\nAnnotation Error Analysis\nAnnotated Corpus Characteristics\nAnnotation Task Tools\nAnnotation Procedure for Entity Resolution\nLinkability.\nSplitting phrases into shorter collocations.\nEntity Resolution (ER) Annotation.\nER Annotation Task formalism.\nPerformance Benchmark\nScientific Entity Resolution\nEvaluation\nConclusion" ], "type": "outline" }
1912.06927
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> #MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement <<<Abstract>>> In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment. <<</Abstract>>> <<<Introduction>>> Over the last couple of years, the MeToo movement has facilitated several discussions about sexual abuse. Social media, especially Twitter, was one of the leading platforms where people shared their experiences of sexual harassment, expressed their opinions, and also offered support to victims. A large portion of these tweets was tagged with a dedicated hashtag #MeToo, and it was one of the main trending topics in many countries. The movement was viral on social media and the hashtag used over 19 million times in a year. The MeToo movement has been described as an essential development against the culture of sexual misconduct by many feminists, activists, and politicians. It is one of the primary examples of successful digital activism facilitated by social media platforms. The movement generated many conversations on stigmatized issues like sexual abuse and violence, which were not often discussed before because of the associated fear of shame or retaliation. This creates an opportunity for researchers to study how people express their opinion on a sensitive topic in an informal setting like social media. However, this is only possible if there are annotated datasets that explore different linguistic facets of such social media narratives. Twitter served as a platform for many different types of narratives during the MeToo movement BIBREF0. It was used for sharing personal stories of abuse, offering support and resources to victims, and expressing support or opposition towards the movement BIBREF1. It was also used to allege individuals of sexual misconduct, refute such claims, and sometimes voice hateful or sarcastic comments about the campaign or individuals. In some cases, people also misused hashtag to share irrelevant or uninformative content. To capture all these complex narratives, we decided to curate a dataset of tweets related to the MeToo movement that is annotated for various linguistic aspects. In this paper, we present a new dataset (MeTooMA) that contains 9,973 tweets associated with the MeToo movement annotated for relevance, stance, hate speech, sarcasm, and dialogue acts. We introduce and annotate three new dialogue acts that are specific to the movement: Allegation, Refutation, and Justification. 
The dataset also contains geographical information about the tweets: from which country it was posted. We expect this dataset would be of great interest and use to both computational and socio-linguists. For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media across multiple countries. <<</Introduction>>> <<<Related Datasets>>> Table TABREF3 presents a summary of datasets that contain social media posts about sexual abuse and annotated for various labels. BIBREF2 created a dataset of 2,500 tweets for identification of malicious intent surrounding the cases of sexual assault. The tweets were annotated for labels like accusational, validation, sensational. Khatua et al BIBREF3 collected 0.7 million tweets containing hashtags such as #MeToo, #AlyssaMilano, #harassed. The annotated a subset of 1024 tweets for the following assault-related labels: assault at the workplace by colleagues, assault at the educational institute by teachers or classmates, assault at public places by strangers, assault at home by a family member, multiple instances of assaults, or a generic tweet about sexual violence. BIBREF4 created the Reddit Domestic Abuse Dataset, which contained 18,336 posts annotated for 2 classes, abuse and non-abuse. BIBREF5 presented a dataset consisting of 5119 tweets distributed into recollection and non-recollection classes. The tweet was annotated as recollection if it explicitly mentioned a personal instance of sexual harassment. Sharifirad et al BIBREF6 created a dataset with 3240 tweets labeled into three categories of sexism: Indirect sexism, casual sexism, physical sexism. SVAC (Sexual Violence in Armed Conflict) is another related dataset which contains reports annotated for six different aspects of sexual violence: prevalence, perpetrators, victims, forms, location, and timing. Unlike all the datasets described above, which are annotated for a single group of labels, our dataset is annotated for five different linguistic aspects. It also has more annotated samples than most of its contemporaries. <<</Related Datasets>>> <<<Dataset>>> <<<Data Collection>>> We focused our data collection over the period of October to December 2018 because October marked the one year anniversary of the MeToo movement. Our first step was to identify a list of countries where the movement was trending during the data collection period. To this end, we used Google's interactive tool named MeTooRisingWithGoogle, which visualizes search trends of the term "MeToo" across the globe. This helped us narrow down our query space to 16 countries. We then scraped 500 random posts from online sexual harassment support forums to help identify keywords or phrases related to the movement . The posts were first manually inspected by the annotators to determine if they were related to the MeToo movement. Namely, if they contained self-disclosures of sexual violence, relevant information about the events associated with the movement, references to news articles or advertisements calling for support for the movement. We then processed the relevant posts to extract a set of uni-grams and bi-grams with high tf-idf scores. 
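A small sketch of how such tf-idf-scored uni-gram and bi-gram candidates could be produced is given below (the pruning of these candidates is described next); the max-over-posts aggregation and the use of scikit-learn are assumptions of this sketch, since the exact scoring procedure is not specified.

from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

def candidate_keywords(posts, top_k=20):
    # Rank uni-grams and bi-grams by their maximum tf-idf score across posts,
    # as raw candidates for a manually pruned lexicon.
    vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    tfidf = vec.fit_transform(posts)                          # (n_posts, n_terms), sparse
    scores = np.asarray(tfidf.max(axis=0).todense()).ravel()  # best score per term
    terms = np.array(vec.get_feature_names_out())
    order = np.argsort(scores)[::-1][:top_k]
    return list(zip(terms[order], scores[order]))

# usage sketch on toy posts (not the scraped forum data)
posts = ["I was harassed at my workplace and nobody believed me",
         "believe survivors and support the movement"]
print(candidate_keywords(posts, top_k=5))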
The annotators further pruned this set by removing irrelevant terms resulting in a lexicon of 75 keywords. Some examples include: #Sexual Harassment, #TimesUp, #EveryDaySexism, assaulted, #WhenIwas, inappropriate, workplace harassment, groped, #NotOkay, believe survivors, #WhyIDidntReport. We then used Twitter's public streaming API to query for tweets from the selected countries, over the chosen three-month time frame, containing any of the keywords. This resulted in a preliminary corpus of 39,406 tweets. We further filtered this data down to include only English tweets based on tweet's language metadata field and also excluded short tweets (less than two tokens). Lastly, we de-duplicated the dataset based on the textual content. Namely, we removed all tweets that had more than 0.8 cosine similarity score on the unaltered text in tf-idf space with any another tweet. We employed this de-duplication to promote more lexical diversity in the dataset. After this filtering, we ended up with a corpus of 9,973 tweets. Table TABREF14 presents the distribution of the tweets by country before and after the filtering process. A large portion of the samples is from India because the MeToo movement has peaked towards the end of 2018 in India. There are very few samples from Russia likely because of content moderation and regulations on social media usage in the country. Figure FIGREF15 gives a geographical distribution of the curated dataset. Due to the sensitive nature of this data, we have decided to remove any personal identifiers (such as names, locations, and hyperlinks) from the examples presented in this paper. We also want to caution the readers that some of the examples in the rest of the paper, though censored for profanity, contain offensive language and express a harsh sentiment. <<</Data Collection>>> <<<Annotation Task>>> We chose against crowd-sourcing the annotation process because of the sensitive nature of the data and also to ensure a high quality of annotations. We employed three domain experts who had advanced degrees in clinical psychology and gender studies. The annotators were first provided with the guidelines document, which included instructions about each task, definitions of class labels, and examples. They studied this document and worked on a few examples to familiarize themselves with the annotation task. They also provided feedback on the document, which helped us refine the instructions and class definitions. The annotation process was broken down into five sub-tasks: for a given tweet, the annotators were instructed to identify relevance, stance, hate speech, sarcasm, and dialogue act. An important consideration was that the sub-tasks were not mutually exclusive, implying that the presence of one label did not consequently mean an absence of any. <<<Task 1: Relevance>>> Here the annotators had to determine if the given tweet was relevant to the MeToo movement. Relevant tweets typically include personal opinions (either positive or negative), experiences of abuse, support for victims, or links to MeToo related news articles. Following are examples of a relevant tweet: Officer [name] could be kicked out of the force after admitting he groped a woman at [place] festival last year. His lawyer argued saying the constable shouldn't be punished because of the #MeToo movement. #notokay #sexualabuse. and an irrelevant tweet: Had a bit of break. Went to the beautiful Port [place] and nearby areas. Absolutely stunning as usual. #beautiful #MeToo #Australia #auspol [URL]. 
We expect this relevance annotation could serve as a useful filter for downstream modeling. <<</Task 1: Relevance>>> <<<Task 2: Stance>>> Stance detection is the task of determining if the author of a text is in favour or opposition of a particular target of interest BIBREF7, BIBREF8. Stance helps understand public opinion about a topic and also has downstream applications in information extraction, text summarization, and textual entailment BIBREF9. We categorized stance into three classes: Support, Opposition, Neither. Support typically included tweets that expressed appreciation of the MeToo movement, shared resources for victims of sexual abuse, or offered empathy towards victims. Following is an example of a tweet with a Support stance: Opinion: #MeToo gives a voice to victims while bringing attention to a nationwide stigma surrounding sexual misconduct at a local level.[URL]. This should go on. On the other hand, Opposition included tweets expressing dissent over the movement or demonstrating indifference towards the victims of sexual abuse or sexual violence. An example of an Opposition tweet is shown below: The double standards and selective outrage make it clear that feminist concerns about power imbalances in the workplace aren't principles but are tools to use against powerful men they hate and wish to destroy. #fakefeminism. #men. <<</Task 2: Stance>>> <<<Task 3: Hate Speech>>> Detection of hate speech in social media has been gaining interest from NLP researchers lately BIBREF10, BIBREF11. Our annotation scheme for hate speech is based on the work of BIBREF12. For a given tweet, the annotators first had to determine if it contained any hate speech. If the tweet was hateful, they had to identify if the hate was Directed or Generalized. Directed hate is targeted at a particular individual or entity, whereas Generalized hate is targeted at larger groups that belonged to a particular ethnicity, gender, or sexual orientation. Following are examples of tweets with Directed hate: [username] were lit minus getting f*c*i*g mouthraped by some drunk chick #MeToo (no body cares because I'm a male) [URL] and Generalized hate: For the men who r asking "y not then, y now?", u guys will still doubt her & harrass her even more for y she shared her story immediately no matter what! When your sister will tell her childhood story to u one day, i challenge u guys to ask "y not then, y now?" #Metoo [username] [URL] #a**holes. <<</Task 3: Hate Speech>>> <<<Task 4: Sarcasm>>> Sarcasm detection has also become a topic of interest for computational linguistics over the last few years BIBREF13, BIBREF14 with applications in areas like sentiment analysis and affective computing. Sarcasm was an integral part of the MeToo movement. For example, many women used the hashtag #NoWomanEver to sarcastically describe some of their experiences with harassment. We instructed the annotators to identify the presence of any sarcasm in a tweet either about the movement or about an individual or entity. Following is an example of a sarcastic tweet: # was pound before it was a hashtag. If you replace hashtag with the pound in the #metoo, you get pound me too. Does that apply to [name]. <<</Task 4: Sarcasm>>> <<<Task 5: Dialogue Acts>>> A dialogue act is defined as the function of a speaker's utterance during a conversation BIBREF15, for example, question, answer, request, suggestion, etc. 
Dialogue Acts have been extensive studied in spoken BIBREF16 and written BIBREF17 conversations and have lately been gaining interest in social media BIBREF18. In this task, we introduced three new dialogue acts that are specific to the MeToo movement: Allegation, Refutation, and Justification. Allegation: This category includes tweets that allege an individual or a group of sexual misconduct. The tweet could either be personal opinion or text summarizing allegations made against someone BIBREF19. The annotators were instructed to identify if the tweet includes the hypothesis of allegation based on first-hand account or a verifiable source confirming the allegation. Following is an example of a tweet that qualifies as an Allegation: More women accuse [name] of grave sexual misconduct...twitter seethes with anger. #MeToo #pervert. Refutation: This category contains tweets where an individual or an organization is denying allegations with or without evidence. Following is an example of a Refutation tweet: She is trying to use the #MeToo movement to settle old scores, says [name1] after [name2] levels sexual assault allegations against him. Justification: The class includes tweets where the author is justifying their actions. These could be alleged actions in the real world (e.g. allegation of sexual misconduct) or some action performed on twitter (e.g. supporting someone who was alleged of misconduct). Following is an example of a tweet that would be tagged as Justification: I actually did try to report it, but he and of his friends got together and lied to the police about it. #WhyIDidNotReport. <<</Task 5: Dialogue Acts>>> <<</Annotation Task>>> <<</Dataset>>> <<<Dataset Analysis>>> This section includes descriptive and quantitative analysis performed on the dataset. <<<Inter-annotator agreement>>> We evaluated inter-annotator agreements using Krippendorff's alpha (K-alpha) BIBREF20. K-alpha, unlike simple agreement measures, accounts for chance correction and class distributions and can be generalized to multiple annotators. Table TABREF27 summarizes the K-alpha measures for all the annotation tasks. We observe very strong agreements for most of the tasks with a maximum of 0.92 for the relevance task. The least agreement observed was for the hate speech task at 0.78. Per recommendations in BIBREF21, we conclude that these annotations are of good quality. We chose a straightforward approach of majority decision for label adjudication: if two or more annotators agreed on assigning a particular class label. In cases of discrepancy, the labels were adjudicated manually by the authors. Table TABREF28 shows a distribution of class labels after adjudication. <<</Inter-annotator agreement>>> <<<Geographical Distribution>>> Figure FIGREF24 presents a distribution of all the tweets by their country of origin. As expected, a large portion of the tweets across all classes are from India, which is consistent with Table TABREF14. Interestingly, the US contributes comparatively a smaller proportion of tweets to Justification category, and likewise, UK contributes a lower portion of tweets to the Generalized Hate category. Further analysis is necessary to establish if these observations are statistically significant. <<</Geographical Distribution>>> <<<Label Correlations>>> We conducted a simple experiment to understand the linguistic similarities (or lack thereof) for different pairs of class labels both within and across tasks. 
To this end, for each pair of labels, we converted the data into its tf-idf representation and then estimated Pearson, Spearman, and Kendall Tau correlation coefficients and also the corresponding $p$ values. The results are summarized in Table TABREF32. Overall, the correlation values seem to be on a lower end with maximum Pearson's correlation value obtained for the label pair Justification - Support, maximum Kendall Tau's correlation for Allegation - Support, and maximum Spearman's correlation for Directed Hate - Generalized Hate. The correlations are statistically significant ($p$ $<$ 0.05) for three pairs of class labels: Directed Hate - Generalized Hate, Directed Hate - Opposition, Sarcasm - Opposition. Sarcasm and Allegation also have statistically significant $p$ values for Pearson and Spearman correlations. <<</Label Correlations>>> <<<Keywords>>> We used SAGE BIBREF22, a topic modelling method, to identify keywords associated with the various class labels in our dataset. SAGE is an unsupervised generative model that can identify words that distinguish one part of the corpus from rest. For our keyword analysis, we removed all the hashtags and only considered tokens that appeared at least five times in the corpus, thus ensuring they were representative of the topic. Table TABREF25 presents the top five keywords associated with each class and also their salience scores. Though Directed and Generalized hate are closely related topics, there is not much overlap between the top 5 salient keywords suggesting that there are linguistic cues to distinguish between them. The word predators is strongly indicative of Generalized Hate, which is intuitive because it is a term often used to describe people who were accused of sexual misconduct. The word lol being associated with Sarcasm is also reasonably intuitive because of sarcasm's close relation with humour. <<</Keywords>>> <<<Sentiment Analysis>>> Figure FIGREF29 presents a word cloud representation of the data where the colours are assigned based on NRC emotion lexicon BIBREF23: green for positive and red for negative. We also analyzed all the classes in terms of Valence, Arousal, and Dominance using the NRC VAD lexicon BIBREF24. The results are summarized in Figure FIGREF33. Of all the classes, Directed-Hate has the largest valence spread, which is likely because of the extreme nature of the opinions expressed in such tweets. The spread for the dominance is fairly narrow for all class labels with the median score slightly above 0.5, suggesting a slightly dominant nature exhibited by the authors of the tweets. <<</Sentiment Analysis>>> <<</Dataset Analysis>>> <<<Discussion>>> This paper introduces a new dataset containing tweets related to the #MeToo movement. It may involve opinions over socially stigmatized issues or self-reports of distressing incidents. Therefore, it is necessary to examine the social impact of this exercise, the ethics of the individuals concerned with the dataset, and it's limitations. Mental health implications: This dataset open sources posts curated by individuals who may have undergone instances of sexual exploitation in the past. While we respect and applaud their decision to raise their voices against their exploitation, we also understand that their revelations may have been met with public backlash and apathy in both the virtual as well as the real world. In such situations, where the social reputation of both accuser and accused may be under threat, mental health concerns become very important. 
As survivors recount their horrific episodes of sexual harassment, it becomes imperative to provide them with therapeutic care BIBREF25 as a safeguard against mental health hazards. Such measures, if combined with the integration of mental health assessment tools in social media platforms, can make victims of sexual abuse feel more empowered and self-contemplative towards their revelations. Use of MeTooMA dataset for population studies: We would like to mention that there have been no attempts to conduct population-centric analysis on the proposed dataset. The analysis presented in this dataset should be seen as a proof of concept to examine the instances of #MeToo movement on Twitter. The authors acknowledge that learning from this dataset cannot be used as-is for any direct social interventions. Network sampling of real-world users for any experimental work beyond this dataset would require careful evaluation beyond the observational analysis presented herein. Moreover, the findings could be used to assist already existing human knowledge. Experiences of the affected communities should be recorded and analyzed carefully, which could otherwise lead to social stigmatization, discrimination and societal bias. Enough care has been ensured so that this work does not come across as trying to target any specific individual for their personal stance on the issues pertaining to the social theme at hand. The authors do not aim to vilify individuals accused in the #MeToo cases in any manner. Our work tries to bring out general trends that may help researchers develop better techniques to understand mass unorganized virtual movements. Effect on marginalized communities: The authors recognize the impact of the #MeToo movement on socially stigmatized populations like LGBTQIA+. The #MeToo movement provided such individuals with the liberty to express their notions about instances of sexual violence and harassment. The movement acted as a catalyst towards implementing social policy changes to benefit the members of these communities. Hence, it is essential to keep in mind that any experimental work undertaken on this dataset should try to minimize the biases against the minority groups which might get amplified in cases of sudden outburst of public reactions over sensitive media discussions. Limitations of individual consent: Considering the mental health aspects of the individuals concerned, social media practitioners should vary of making automated interventions to aid the victims of sexual abuse as some individuals might not prefer to disclose their sexual identities or notions. Concerned social media users might also repeal their social media information if found out that their personal information may be potentially utilised for computational analysis. Hence, it is imperative to seek subtle individual consent before trying to profile authors involved in online discussions to uphold personal privacy. <<</Discussion>>> <<<Use Cases>>> The authors would like to formally propose some ideas on possible extensions of the proposed dataset: The rise of online hate speech and its related behaviours like cyber-bullying has been a hot topic of research in gender studies BIBREF26. Our dataset could be utilized for extracting actionable insights and virtual dynamics to identify gender roles for analyzing sexual abuse revelations similar to BIBREF27. 
The dataset could be utilized by psycholinguistics for extracting contextualized lexicons to examine how influential people are portrayed on public platforms in events of mass social media movements BIBREF28. Interestingly, such analysis may help linguists determine the power dynamics of authoritative people in terms of perspective and sentiment through campaign modelling. Marginalized voices affected by mass social movements can be studied through polarization analysis on graph-based simulations of the social media networks. Based on the data gathered from these nodes, community interactions could be leveraged to identify indigenous issues pertaining to societal unrest across various sections of the societyBIBREF29. Challenge Proposal: The authors of the paper would like to extend the present work as a challenge proposal for building computational semantic analysis systems aimed at online social movements. In contrast to already available datasets and existing challenges, we propose tasks on detecting hate speech, sarcasm, stance and relevancy that will be more focused on social media activities surrounding revelations of sexual abuse and harassment. The tasks may utilize the message-level text, linked images, tweet-level metadata and user-level interactions to model systems that are Fair, Accountable, Interpretable and Responsible (FAIR). Research ideas emerging from this work should not be limited to the above discussion. If needed, supplementary data required to enrich this dataset can be collected utilizing Twitter API and JSON records for exploratory tasks beyond the scope of the paper. <<</Use Cases>>> <<<Conclusion>>> In this paper, we presented a new dataset annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. To our knowledge, there are no datasets out there that provide annotations across so many different dimensions. This allows researchers to perform various multi-label and multi-aspect classification experiments. Additionally, researchers could also address some interesting questions on how different linguistic components influence each other: e.g. does understanding one's stance help in better prediction of hate speech? In addition to these exciting computational challenges, we expect this data could be useful for socio and psycholinguists in understanding the language used by victims when disclosing their experiences of abuse. Likewise, they could analyze the language used by alleged individuals in justifying their actions. It also provides a chance to examine the language used to express hate in the context of sexual abuse. In the future, we would like to propose challenge tasks around this data where the participants will have to build computational models to capture all the different linguistic aspects that were annotated. We expect such a task would drive researchers to ask more interesting questions, find limitations of the dataset, propose improvements, and provide interesting insights. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Datasets\nDataset\nData Collection\nAnnotation Task\nTask 1: Relevance\nTask 2: Stance\nTask 3: Hate Speech\nTask 4: Sarcasm\nTask 5: Dialogue Acts\nDataset Analysis\nInter-annotator agreement\nGeographical Distribution\nLabel Correlations\nKeywords\nSentiment Analysis\nDiscussion\nUse Cases\nConclusion" ], "type": "outline" }
1909.01247
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Introducing RONEC -- the Romanian Named Entity Corpus <<<Abstract>>> We present RONEC - the Named Entity Corpus for the Romanian language. The corpus contains over 26000 entities in ~5000 annotated sentences, belonging to 16 distinct classes. The sentences have been extracted from a copy-right free newspaper, covering several styles. This corpus represents the first initiative in the Romanian language space specifically targeted for named entity recognition. It is available in BRAT and CoNLL-U Plus formats, and it is free to use and extend at github.com/dumitrescustefan/ronec . <<</Abstract>>> <<<Introduction>>> Language resources are an essential component in entire R&D domains. From the humble but vast repositories of monolingual texts that are used by the newest language modeling approaches like BERT and GPT, to parallel corpora that allows our machine translation systems to inch closer to human performance, to the more specialized resources like WordNets that encode semantic relations between nodes, these resources are necessary for the general advancement of Natural Language Processing, which eventually evolves into real apps and services we are (already) taking for granted. We introduce RONEC - the ROmanian Named Entity Corpus, a free, open-source resource that contains annotated named entities in copy-right free text. A named entity corpus is generally used for Named Entity Recognition (NER): the identification of entities in text such as names of persons, locations, companies, dates, quantities, monetary values, etc. This information would be very useful for any number of applications: from a general information extraction system down to task-specific apps such as identifying monetary values in invoices or product and company references in customer reviews. We motivate the need for this corpus primarily because, for Romanian, there is no other such corpus. This basic necessity has sharply arisen as we, while working on a different project, have found out there are no usable resources to help us in an Information Extraction task: we were unable to extract people, locations or dates/values. This constituted a major road-block, with the only solution being to create such a corpus ourselves. As the corpus was out-of-scope for this project, the work was done privately, outside the umbrella of any authors' affiliations - this is why we are able to distribute this corpus completely free. The current landscape in Romania regarding language resources is relatively unchanged from the outline given by the META-NET project over six years ago. The in-depth analysis performed in this European-wide Horizon2020-funded project revealed that the Romanian language falls in the "fragmentary support" category, just above the last, "weak/none" category (see the language/support matrix in BIBREF3). This is why, in 2019/2020, we are able to present the first NER resource for Romanian. <<<Related corpora>>> We note that, while fragmentary, there are a few related language resources available, but none that specifically target named entities: <<<ROCO corpus>>> ROCO BIBREF4 is a Romanian journalistic corpus that contains approx. 7.1M tokens. 
It is rich in proper names, numerals and named entities. The corpus has been automatically annotated at word-level with morphosyntactic information (MSD annotations). <<</ROCO corpus>>> <<<ROMBAC corpus>>> Released in 2016, ROMBAC BIBREF5 is a Romanian text corpus containing 41M words divided in relatively equal domains like journalism, legalese, fiction, medicine, etc. Similarly to ROCO, it is automatically annotated at word level with MSD descriptors. <<</ROMBAC corpus>>> <<<CoRoLa corpus>>> The much larger and recently released CoRoLa corpus BIBREF6 contains over 1B words, similarly automatically annotated. In all these corpora the named entities are not a separate category - the texts are morphologically and syntactically annotated and all proper nouns are marked as such - NP - without any other annotation or assigned category. Thus, these corpora cannot be used in a true NER sense. Furthermore, annotations were done automatically with a tokenizer/tagger/parser, and thus are of slightly lower quality than one would expect of a gold-standard corpus. <<</CoRoLa corpus>>> <<</Related corpora>>> <<</Introduction>>> <<<Corpus Description>>> The corpus, at its current version 1.0 is composed of 5127 sentences, annotated with 16 classes, for a total of 26377 annotated entities. The 16 classes are: PERSON, NAT_REL_POL, ORG, GPE, LOC, FACILITY, PRODUCT, EVENT, LANGUAGE, WORK_OF_ART, DATETIME, PERIOD, MONEY, QUANTITY, NUMERIC_VALUE and ORDINAL. It is based on copyright-free text extracted from Southeast European Times (SETimes). The news portal has published “news and views from Southeast Europe” in ten languages, including Romanian. SETimes has been used in the past for several annotated corpora, including parallel corpora for machine translation. For RONEC we have used a hand-picked selection of sentences belonging to several categories (see table TABREF16 for stylistic examples). The corpus contains the standard diacritics in Romanian: letters ș and ț are written with a comma, not with a cedilla (like ş and ţ). In Romanian many older texts are written with cedillas instead of commas because full Unicode support in Windows came much later than the classic extended Ascii which only contained the cedilla letters. The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8. Each class will be presented in detail, with examples, in the section SECREF3 A summary of available classes with word counts for each is available in table TABREF18. The corpus is available in two formats: BRAT and CoNLL-U Plus. <<<BRAT format>>> As the corpus was developed in the BRAT environment, it was natural to keep this format as-is. BRAT is an online environment for collaborative text annotation - a web-based tool where several people can mark words, sub-word pieces, multiple word expressions, can link them together by relations, etc. The back-end format is very simple: given a text file that contains raw sentences, in another text file every annotated entity is specified by the start/end character offset as well as the entity type, one per line. RONEC is exported in the BRAT format as ready-to-use in the BRAT annotator itself. The corpus is pre-split into sub-folders, and contains all the extra files such as the entity list, etc, needed to directly start an eventual edit/extension of the corpus. 
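To make the standoff layout described above concrete, here is a minimal, illustrative Python sketch for reading the entity lines of a BRAT .ann file. It is not part of the released RONEC tooling; it assumes the standard BRAT convention in which the entity id, the "TYPE START END" span and the covered text are separated by tabs (the tabs are not visible in the flattened example that follows), and the file name in the usage comment is hypothetical.

from collections import namedtuple

Entity = namedtuple("Entity", ["eid", "etype", "start", "end", "text"])

def read_brat_entities(path):
    """Read the text-bound annotations (T lines) from a BRAT .ann file."""
    entities = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.startswith("T"):
                continue  # skip blank lines, relations, attributes, notes
            eid, span, text = line.split("\t")
            # span looks like "ORDINAL 21 26"; discontinuous spans
            # ("TYPE s1 e1;s2 e2") are not handled in this sketch
            etype, start, end = span.split(" ")
            entities.append(Entity(eid, etype, int(start), int(end), text))
    return entities

# Usage (hypothetical path): entities = read_brat_entities("ronec/sample.ann")

The character offsets index into the paired raw-sentence text file, which is what makes the format straightforward to load into or export from the BRAT web annotator.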
Example (raw/untokenized) sentences: Tot în cadrul etapei a 2-a, a avut loc întâlnirea Vardar Skopje - S.C. Pick Szeged, care s-a încheiat la egalitate, 24 - 24. I s-a decernat Premiul Nobel pentru literatură pe anul 1959. Example annotation format: T1 ORDINAL 21 26 a 2-a T2 ORGANIZATION 50 63 Vardar Skopje T3 ORGANIZATION 66 82 S.C. Pick Szeged T4 NUMERIC_VALUE 116 118 24 T5 NUMERIC_VALUE 121 123 24 T6 DATETIME 175 184 anul 1959 <<</BRAT format>>> <<<CoNLL-U Plus format>>> The CoNLL-U Plus format extends the standard CoNLL-U which is used to annotate sentences, and in which many corpora are found today. The CoNLL-U format annotates one word per line with 10 distinct "columns" (tab separated): nolistsep ID: word index; FORM: unmodified word from the sentence; LEMMA: the word's lemma; UPOS: Universal part-of-speech tag; XPOS: Language-specific part-of-speech tag; FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; HEAD: Head of the current word, which is either a value of ID or zero; DEPREL: Universal dependency relation to the HEAD or a defined language-specific subtype of one; DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs; MISC: Miscellaneous annotations such as space after word. The CoNLL-U Plus extends this format by allowing a variable number of columns, with the restriction that the columns are to be defined in the header. For RONEC, we define our CoNLL-U Plus format as the standard 10 columns plus another extra column named RONEC:CLASS. This column has the following format: nolistsep [noitemsep] each named entity has a distinct id in the sentence, starting from 1; as an entity can span several words, all words that belong to it have the same id (no relation to word indexes) the first word belonging to an entity also contains its class (e.g. word "John" in entity "John Smith" will be marked as "1:PERSON") a non-entity word is marked with an asterisk * Table TABREF37 shows the CoNLL-U Plus format where for example "a 2-a" is an ORDINAL entity spanning 3 words. The first word "a" is marked in this last column as "1:ORDINAL" while the following words just with the id "1". The CoNLL-U Plus format we provide was created as follows: (1) annotate the raw sentences using the NLP-Cube tool for Romanian (it provides everything from tokenization to parsing, filling in all attributes in columns #1-#10; (2) align each token with the human-made entity annotations from the BRAT environment (the alignment is done automatically and is error-free) and fill in column #11. <<</CoNLL-U Plus format>>> <<</Corpus Description>>> <<<Classes and Annotation Methodology>>> For the English language, we found two "categories" of NER annotations to be more prominent: CoNLL- and ACE-style. Because CoNLL only annotates a few classes (depending on the corpora, starting from the basic three: PERSON, ORGANIZATION and LOCATION, up to seven), we chose to follow the ACE-style with 18 different classes. After analyzing the ACE guide we have settled on 16 final classes that seemed more appropriate for Romanian, seen in table TABREF18. In the following sub-sections we will describe each class in turn, with a few examples. Some examples have been left in Romanian while some have been translated in English for the reader's convenience. In the examples at the end of each class' description, translations in English are colored for easier reading. <<<PERSON>>> Persons, including fictive characters. 
We also mark common nouns that refer to a person (or several), including pronouns (us, them, they), but not articles (e.g. in "an individual" we don't mark "an"). Positions are not marked unless they directly refer to the person: "The presidential counselor has advised ... that a new counselor position is open.", here we mark "presidential counselor" because it refers to a person and not the "counselor" at the end of the sentence as it refers only to a position. Locul doi i-a revenit româncei Otilia Aionesei, o elevă de 17 ani. The second place was won by Otilia Aionesei, a 17 year old student. Ministrul bulgar pentru afaceri europene, Meglena Kuneva ... The Bulgarian Minister for European Affairs, Meglena Kuneva ... <<</PERSON>>> <<<NAT_REL_POL>>> These are nationalities or religious or political groups. We include words that indicate the nationality of a person, group or product/object. Generally words marked as NAT_REL_POL are adjectives. avionul american the American airplane Grupul olandez the Dutch group Grecii iși vor alege președintele. The Greeks will elect their president. <<</NAT_REL_POL>>> <<<ORGANIZATION>>> Companies, agencies, institutions, sports teams, groups of people. These entities must have an organizational structure. We only mark full organizational entities, not fragments, divisions or sub-structures. Universitatea Politehnica București a decis ... The Politehnic University of Bucharest has decided ... Adobe Inc. a lansat un nou produs. Adobe Inc. has launched a new product. <<</ORGANIZATION>>> <<<GPE>>> Geo-political entities: countries, counties, cities, villages. GPE entities have all of the following components: (1) a population, (2) a well-defined governing/organizing structure and (3) a physical location. GPE entities are not sub-entities (like a neighbourhood from a city). Armin van Buuren s-a născut în Leiden. Armin van Buuren was born in Leiden. U.S.A. ramane indiferentă amenințărilor Coreei de Nord. U.S.A. remains indifferent to North Korea's threats. <<</GPE>>> <<<LOC>>> Non-geo-political locations: mountains, seas, lakes, streets, neighbourhoods, addresses, continents, regions that are not GPEs. We include regions such as Middle East, "continents" like Central America or East Europe. Such regions include multiple countries, each with its own government and thus cannot be GPEs. Pe DN7 Petroșani-Obârșia Lotrului carosabilul era umed, acoperit (cca 1 cm) cu zăpadă, iar de la Obârșia Lotrului la stațiunea Vidra, stratul de zăpadă era de 5-6 cm. On DN7 Petroșani-Obârșia Lotrului the road was wet, covered (about 1cm) with snow, and from Obârșia Lotrului to Vidra resort the snow depth was around 5-6 cm. Produsele comercializate în Europa de Est au o calitate inferioară celor din vest. Products sold in East Europe have a lower quality than those sold in the west. <<</LOC>>> <<<FACILITY>>> Buildings, airports, highways, bridges or other functional structures built by humans. Buildings or other structures which house people, such as homes, factories, stadiums, office buildings, prisons, museums, tunnels, train stations, etc., named or not. Everything that falls within the architectural and civil engineering domains should be labeled as a FACILITY. 
We do not mark structures composed of multiple (and distinct) sub-structures, like a named area that is composed of several buildings, or "micro"-structures such as an apartment (as it is a unit of an apartment building). However, larger, named functional structures can still be marked (such as "terminal X" of an airport). Autostrada A2 a intrat în reparații pe o bandă, însă pe A1 nu au fost încă începute lucrările. Repairs on one lane have commenced on the A2 highway, while on A1 no works have started yet. Aeroportul Henri Coandă ar putea sa fie extins cu un nou terminal. Henri Coandă Airport could be extended with a new terminal. <<</FACILITY>>> <<<PRODUCT>>> Objects, cars, food, items, anything that is a product, including software (such as Photoshop, Word, etc.). We don't mark services or processes. With very few exceptions (such as software products), PRODUCT entities have to have physical form, be directly man-made. We don't mark entities such as credit cards, written proofs, etc. We don't include the producer's name unless it's embedded in the name of the product. Mașina cumpărată este o Mazda. The bought car is a Mazda. S-au cumpărat 5 Ford Taurus și 2 autobuze Volvo. 5 Ford Taurus and 2 Volvo buses have been acquired. <<</PRODUCT>>> <<<EVENT>>> Named events: Storms (e.g.:"Sandy"), battles, wars, sports events, etc. We don't mark sports teams (they are ORGs), matches (e.g. "Steaua-Rapid" will be marked as two separate ORGs even if they refer to a football match between the two teams, but the match is not specific). Events have to be significant, with at least national impact, not local. Războiul cel Mare, Războiul Națiunilor, denumit, în timpul celui de Al Doilea Război Mondial, Primul Război Mondial, a fost un conflict militar de dimensiuni mondiale. The Great War, War of the Nations, as it was called during the Second World War, the First World War was a global-scale military conflict. <<</EVENT>>> <<<LANGUAGE>>> This class represents all languages. Românii din România vorbesc română. Romanians from Romania speak Romanian. În Moldova se vorbește rusa și româna. In Moldavia they speak Russian and Romanian. <<</LANGUAGE>>> <<<WORK_OF_ART>>> Books, songs, TV shows, pictures; everything that is a work of art/culture created by humans. We mark just their name. We don't mark laws. Accesul la Mona Lisa a fost temporar interzis vizitatorilor. Access to Mona Lisa was temporarily forbidden to visitors. În această seară la Vrei sa Fii Miliardar vom avea un invitat special. This evening in Who Wants To Be A Millionaire we will have a special guest. <<</WORK_OF_ART>>> <<<DATETIME>>> Date and time values. We will mark full constructions, not parts, if they refer to the same moment (e.g. a comma separates two distinct DATETIME entities only if they refer to distinct moments). If we have a well specified period (e.g. "between 20-22 hours") we mark it as PERIOD, otherwise less well defined periods are marked as DATETIME (e.g.: "last summer", "September", "Wednesday", "three days"); Ages are marked as DATETIME as well. Prepositions are not included. Te rog să vii aici în cel mult o oră, nu mâine sau poimâine. Please come here in one hour at most, not tomorrow or the next day. Actul s-a semnat la orele 16. The paper was signed at 16 hours. August este o lună secetoasă. August is a dry month. 
Pe data de 20 martie între orele 20-22 va fi oprită alimentarea cu curent. On the 20th of March, between 20-22 hours, electricity will be cut-off. <<</DATETIME>>> <<<PERIOD>>> Periods/time intervals. Periods have to be very well marked in text. If a period is not like "a-b" then it is a DATETIME. Spectacolul are loc între 1 și 3 Aprilie. The show takes place between 1 and 3 April. În prima jumătate a lunii iunie va avea loc evenimentul de două zile. In the first half of June the two-day event will take place. <<</PERIOD>>> <<<MONEY>>> Money, monetary values, including units (e.g. USD, $, RON, lei, francs, pounds, Euro, etc.) written with number or letters. Entities that contain any monetary reference, including measuring units, will be marked as MONEY (e.g. 10$/sqm, 50 lei per hour). Words that are not clear values will not be marked, such as "an amount of money", "he received a coin". Primarul a semnat un contract în valoare de 10 milioane lei noi, echivalentul a aproape 2.6m EUR. The mayor signed a contract worth 10 million new lei, equivalent of almost 2.6m EUR. <<</MONEY>>> <<<QUANTITY>>> Measurements, such as weight, distance, etc. Any type of quantity belongs in this class. Conducătorul auto avea peste 1g/ml alcool în sânge, fiind oprit deoarece a fost prins cu peste 120 km/h în localitate. The car driver had over 1g/ml blood alcohol, and was stopped because he was caught speeding with over 120km/h in the city. <<</QUANTITY>>> <<<NUMERIC_VALUE>>> Any numeric value (including phone numbers), written with letters or numbers or as percents, which is not MONEY, QUANTITY or ORDINAL. Raportul XII-2 arată 4 552 de investitori, iar structura de portofoliu este: cont curent 0,05%, certificate de trezorerie 66,96%, depozite bancare 13,53%, obligațiuni municipale 19,46%. The XII-2 report shows 4 552 investors, and the portfolio structure is: current account 0,05%, treasury bonds 66,96%, bank deposits 13,53%, municipal bonds 19,46%. <<</NUMERIC_VALUE>>> <<<ORDINAL>>> The first, the second, last, 30th, etc.; An ordinal must imply an order relation between elements. For example, "second grade" does not involve a direct order relation; it indicates just a succession in grades in a school system. Primul loc a fost ocupat de echipa Germaniei. The first place was won by Germany's team. The corpus creation process involved a small number of people that have voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian, and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copy-right free text from which we hand-picked sentences to maximize the amount of entities while maintaining style balance. The annotation process was a trial-and-error, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people, has taken about 6 months (as work was volunteer-based, we could not have reached for 100% time commitment from the people involved), and followed the steps: Each person would annotate the full corpus (this included the cycles of shaping up the annotation guide, and re-annotation). 
Inter-annotator agreement (ITA) at this point was relatively low, at 60-70%, especially for a number of classes. We then automatically merged all annotations, with the following criterion: if 3 of the 4 annotators agreed on an entity (class&start-stop), then it would go unchanged; otherwise mark the entity (longest span) as CONFLICTED. Two teams were created, each with two persons. Each team annotated the full corpus again, starting from the previous step. At this point, class-average ITA has risen to over 85%. Next, the same automatic merging happened, this time entities remained unchanged if both annotations agreed. Finally, one of the authors went through the full corpus one more time, correcting disagreements. We would like to make a few notes regarding classes and inter-annotator agreements: nolistsep [noitemsep] Classes like ORGANIZATION, NAT_REL_POL, LANGUAGE or GPEs have the highest ITA, over 98%. They are pretty clear and distinct from other classes. The DATETIME class also has a high ITA, with some overlap with PERIOD: annotators could fall-back if they were not sure that an expression was a PERIOD and simply mark it as DATETIME. WORK_OF_ART and EVENTs have caused some problems because the scope could not be properly defined from just one sentence. For example, a fair in a city could be a local event, but could also be a national periodic event. MONEY, QUANTITY and ORDINAL all are more specific classes than NUMERIC_VALUE. So, in cases where a numeric value has a unit of measure by it, it should become a QUANTITY, not a NUMERIC_VALUE. However, this "specificity" has created some confusion between these classes, just like with DATETIME and PERIOD. The ORDINAL class is a bit ambiguous, because, even though it ranks "higher" than NUMERIC_VALUE, it is the least diverse, most of the entities following the same patterns. PRODUCT and FACILITY classes have the lowest ITA by far (less than 40% in the first annotation cycle, less than 70% in the second). We actually considered removing these classes from the annotation process, but to try to mimic the OntoNotes classes as much as possible we decided to keep them in. There were many cases where the annotators disagreed about the scope of words being facilities or products. Even in the ACE guidelines these two classes are not very well "documented" with examples of what is and what is not a PRODUCT or FACILITY. Considering that these classes are, in our opinion, of the lowest importance among all the classes, a lower ITA was accepted. Finally, we would like to address the "semantic scope" of the entities - for example, for class PERSON, we do not annotate only proper nouns (NPs) but basically any reference to a person (e.g. through pronouns "she", job position titles, common nouns such as "father", etc.). We do this because we would like a high-coverage corpus, where entities are marked as more semantically-oriented rather than syntactically - in the same way ACE entities are more encompassing than CoNLL entities. We note that, for example, if one would like strict proper noun entities, it is very easy to extract from a PERSON multi-word entity only those words which are syntactically marked (by any tagger) as NPs. <<</ORDINAL>>> <<</Classes and Annotation Methodology>>> <<<Conclusions>>> We have presented RONEC - the first Named Entity Corpus for the Romanian language. At its current version, in its 5127 sentences we have 26377 annotated entities in 16 different classes. 
The corpus is based on copy-right free text, and is released as open-source, free to use and extend. We hope that in time this corpus will grow in size and mature towards a strong resource for Romanian. For this to happen we have released the corpus in two formats: CoNLL-U PLus, which is a text-based tab-separated pre-tokenized and annotated format that is simple to use, and BRAT, which is practically plug-and-play into the BRAT web annotation tool where anybody can add and annotate new sentences. Also, in the GitHub repo there are automatic alignment and conversion script to and from the two formats so they could easily be exported between. Finally, we have also provided an annotation guide that we will improve, and in time evolve into a full annotation document like the ACE Annotation Guidelines for Entities V6.6 BIBREF8. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nRelated corpora\nROCO corpus\nROMBAC corpus\nCoRoLa corpus\nCorpus Description\nBRAT format\nCoNLL-U Plus format\nClasses and Annotation Methodology\nPERSON\nNAT_REL_POL\nORGANIZATION\nGPE\nLOC\nFACILITY\nPRODUCT\nEVENT\nLANGUAGE\nWORK_OF_ART\nDATETIME\nPERIOD\nMONEY\nQUANTITY\nNUMERIC_VALUE\nORDINAL\nConclusions" ], "type": "outline" }
1912.01220
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Modelling Semantic Categories using Conceptual Neighborhood <<<Abstract>>> While many methods for learning vector space embeddings have been proposed in the field of Natural Language Processing, these methods typically do not distinguish between categories and individuals. Intuitively, if individuals are represented as vectors, we can think of categories as (soft) regions in the embedding space. Unfortunately, meaningful regions can be difficult to estimate, especially since we often have few examples of individuals that belong to a given category. To address this issue, we rely on the fact that different categories are often highly interdependent. In particular, categories often have conceptual neighbors, which are disjoint from but closely related to the given category (e.g.\ fruit and vegetable). Our hypothesis is that more accurate category representations can be learned by relying on the assumption that the regions representing such conceptual neighbors should be adjacent in the embedding space. We propose a simple method for identifying conceptual neighbors and then show that incorporating these conceptual neighbors indeed leads to more accurate region based representations. <<</Abstract>>> <<<Introduction>>> Vector space embeddings are commonly used to represent entities in fields such as machine learning (ML) BIBREF0, natural language processing (NLP) BIBREF1, information retrieval (IR) BIBREF2 and cognitive science BIBREF3. An important point, however, is that such representations usually represent both individuals and categories as vectors BIBREF4, BIBREF5, BIBREF6. Note that in this paper, we use the term category to denote natural groupings of individuals, as it is used in cognitive science, with individuals referring to the objects from the considered domain of discourse. For example, the individuals carrot and cucumber belong to the vegetable category. We use the term entities as an umbrella term covering both individuals and categories. Given that a category corresponds to a set of individuals (i.e. its instances), modelling them as (possibly imprecise) regions in the embedding space seems more natural than using vectors. In fact, it has been shown that the vector representations of individuals that belong to the same category are indeed often clustered together in learned vector space embeddings BIBREF7, BIBREF8. The view of categories being regions is also common in cognitive science BIBREF3. However, learning region representations of categories is a challenging problem, because we typically only have a handful of examples of individuals that belong to a given category. One common assumption is that natural categories can be modelled using convex regions BIBREF3, which simplifies the estimation problem. For instance, based on this assumption, BIBREF9 modelled categories using Gaussian distributions and showed that these distributions can be used for knowledge base completion. Unfortunately, this strategy still requires a relatively high number of training examples to be successful. However, when learning categories, humans do not only rely on examples. 
For instance, there is evidence that when learning the meaning of nouns, children rely on the default assumption that these nouns denote mutually exclusive categories BIBREF10. In this paper, we will in particular take advantage of the fact that many natural categories are organized into so-called contrast sets BIBREF11. These are sets of closely related categories which exhaustively cover some sub-domain, and which are assumed to be mutually exclusive; e.g. the set of all common color names, the set $\lbrace \text{fruit},\text{vegetable}\rbrace $ or the set $\lbrace \text{NLP}, \text{IR}, \text{ML}\rbrace $. Categories from the same contrast set often compete for coverage. For instance, we can think of the NLP domain as consisting of research topics that involve processing textual information which are not covered by the IR and ML domains. Categories which compete for coverage in this way are known as conceptual neighbors BIBREF12; e.g. NLP and IR, red and orange, fruit and vegetable. Note that the exact boundary between two conceptual neighbors may be vague (e.g. tomato can be classified as fruit or as vegetable). In this paper, we propose a method for learning region representations of categories which takes advantage of conceptual neighborhood, especially in scenarios where the number of available training examples is small. The main idea is illustrated in Figure FIGREF2, which depicts a situation where we are given some examples of a target category $C$ as well as some related categories $N_1,N_2,N_3,N_4$. If we have to estimate a region from the examples of $C$ alone, the small elliptical region shown in red would be a reasonable choice. More generally, a standard approach would be to estimate a Gaussian distribution from the given examples. However, vector space embeddings typically have hundreds of dimensions, while the number of known examples of the target category is often far lower (e.g. 2 or 3). In such settings we will almost inevitably underestimate the coverage of the category. However, in the example from Figure FIGREF2, if we take into account the knowledge that $N_1,N_2,N_3,N_4$ are conceptual neighbors of $C$, the much larger, shaded region becomes a more natural choice for representing $C$. Indeed, the fact that e.g. $C$ and $N_1$ are conceptual neighbors suggests that any point in between the examples of these categories needs to be contained either in the region representing $C$ or the region representing $N_1$. In the spirit of prototype approaches to categorization BIBREF13, without any further information it makes sense to assume that their boundary is more or less half-way in between the known examples. The contribution of this paper is two-fold. First, we propose a method for identifying conceptual neighbors from text corpora. We essentially treat this problem as a standard text classification problem, by relying on categories with large numbers of training examples to generate a suitable distant supervision signal. Second, we show that the predicted conceptual neighbors can effectively be used to learn better category representations. <<</Introduction>>> <<<Related Work>>> In distributional semantics, categories are frequently modelled as vectors. For example, BIBREF14 study the problem of deciding for a word pair $(i,c)$ whether $i$ denotes an instance of the category $c$, which they refer to as instantiation. They treat this problem as a binary classification problem, where e.g. 
the pair (AAAI, conference) would be a positive example, while (conference, AAAI) and (New York, conference) would be negative examples. Different from our setting, their aim is thus essentially to model the instantiation relation itself, similar in spirit to how hypernymy has been modelled in NLP BIBREF15, BIBREF16. To predict instantiation, they use a simple neural network model which takes as input the word vectors of the input pair $(i,c)$. They also experiment with an approach that instead models a given category as the average of the word vectors of its known instances and found that this led to better results. A few authors have already considered the problem of learning region representations of categories. Most closely related, BIBREF17 model ontology concepts using Gaussian distributions. In BIBREF18 DBLP:conf/ecai/JameelS16, a model is presented which embeds Wikipedia entities such that entities which have the same WikiData type are characterized by some region within a low-dimensional subspace of the embedding. Within the context of knowledge graph embedding, several approaches have been proposed that essentially model semantic types as regions BIBREF19, BIBREF20. A few approaches have also been proposed for modelling word meaning using regions BIBREF21, BIBREF22 or Gaussian distributions BIBREF23. Along similar lines, several authors have proposed approaches inspired by probabilistic topic modelling, which model latent topics using Gaussians BIBREF24 or related distributions BIBREF25. On the other hand, the notion of conceptual neighborhood has been covered in most detail in the field of spatial cognition, starting with the influential work of BIBREF12. In computational linguistics, moreover, this representation framework aligns with lexical semantics traditions where word meaning is constructed in terms of semantic decomposition, i.e. lexical items being minimally decomposed into structured forms (or templates) rather than sets of features BIBREF26, effectively mimicking a sort of conceptual neighbourhood. In Pustejovsky's generative lexicon, a set of “semantic devices” is proposed such that they behave in semantics similarly as grammars do in syntax. Specifically, this framework considers the qualia structure of a lexical unit as a set of expressive semantic distinctions, the most relevant for our purposes being the so-called formal role, which is defined as “that which distinguishes the object within a larger domain”, e.g. shape or color. This semantic interplay between cognitive science and computational linguistics gave way to the term lexical coherence, which has been used for contextualizing the meaning of words in terms of how they relate to their conceptual neighbors BIBREF27, or by providing expressive lexical semantic resources in the form of ontologies BIBREF28. <<</Related Work>>> <<<Model Description>>> Our aim is to introduce a model for learning region-based category representations which can take advantage of knowledge about the conceptual neighborhood of that category. Throughout the paper, we focus in particular on modelling categories from the BabelNet taxonomy BIBREF29, although the proposed method can be applied to any resource which (i) organizes categories in a taxonomy and (ii) provides examples of individuals that belong to these categories. Selecting BabelNet as our use case is a natural choice, however, given its large scale and the fact that it integrates many lexical and ontological resources. 
As the possible conceptual neighbors of a given BabelNet category $C$, we consider all its siblings in the taxonomy, i.e. all categories $C_1,...,C_k$ which share a direct parent with $C$. To select which of these siblings are most likely to be conceptual neighbors, we look at mentions of these categories in a text corpus. As an illustrative example, consider the pair (hamlet,village) and the following sentence: In British geography, a hamlet is considered smaller than a village and ... From this sentence, we can derive that hamlet and village are disjoint but closely related categories, thus suggesting that they are conceptual neighbors. However, training a classifier that can identify conceptual neighbors from such sentences is complicated by the fact that conceptual neighborhood is not covered in any existing lexical resource, to the best of our knowledge, which means that large sets of training examples are not readily available. To address this lack of training data, we rely on a distant supervision strategy. The central insight is that for categories with a large number of known instances, we can use the embeddings of these instances to check whether two categories are conceptual neighbors. In particular, our approach involves the following three steps: Identify pairs of categories that are likely to be conceptual neighbors, based on the vector representations of their known instances. Use the pairs from Step 1 to train a classifier that can recognize sentences which indicate that two categories are conceptual neighbors. Use the classifier from Step 2 to predict which pairs of BabelNet categories are conceptual neighbors and use these predictions to learn category representations. Note that in Step 1 we can only consider BabelNet categories with a large number of instances, while the end result in Step 3 is that we can predict conceptual neighborhood for categories with only few known instances. We now discuss the three aforementioned steps one by one. <<<Step 1: Predicting Conceptual Neighborhood from Embeddings>>> Our aim here is to generate distant supervision labels for pairs of categories, indicating whether they are likely to be conceptual neighbors. These labels will then be used in Section SECREF12 to train a classifier for predicting conceptual neighborhood from text. Let $A$ and $B$ be siblings in the BabelNet taxonomy. If enough examples of individuals belonging to these categories are provided in BabelNet, we can use these instances to estimate high-quality representations of $A$ and $B$, and thus estimate whether they are likely to be conceptual neighbors. In particular, we split the known instances of $A$ into a training set $I^A_{\textit {train}}$ and test set $I^A_{\textit {test}}$, and similar for $B$. We then train two types of classifiers. The first classifier estimates a Gaussian distribution for each category, using the training instances in $I^A_{\textit {train}}$ and $I^B_{\textit {train}}$ respectively. This should provide us with a reasonable representation of $A$ and $B$ regardless of whether they are conceptual neighbors. In the second approach, we first learn a Gaussian distribution from the joint set of training examples $I^A_{\textit {train}} \cup I^B_{\textit {train}}$ and then train a logistic regression classifier to separate instances from $A$ and $B$. 
In particular, note that in this way, we directly impose the requirement that the regions modelling $A$ and $B$ are adjacent in the embedding space (intuitively corresponding to two halves of a Gaussian distribution). We can thus expect that the second approach should lead to better predictions than the first approach if $A$ and $B$ are conceptual neighbors and to worse predictions if they are not. In particular, we propose to use the relative performance of the two classifiers as the required distant supervision signal for predicting conceptual neighborhood. We now describe the two classification models in more detail, after which we explain how these models are used to generate the distant supervision labels. Gaussian Classifier The first classifier follows the basic approach from BIBREF17, where Gaussian distributions were similarly used to model WikiData categories. In particular, we estimate the probability that an individual $e$ with vector representation $\mathbf {e}$ is an instance of the category $A$ as follows: where $\lambda _A$ is the prior probability of belonging to category $A$, the likelihood $f(\mathbf {e} | A)$ is modelled as a Gaussian distribution and $f(\mathbf {e})$ will also be modelled as a Gaussian distribution. Intuitively, we think of the Gaussian $f(. | A)$ as defining a soft region, modelling the category $A$. Given the high-dimensional nature of typical vector space embeddings, we use a mean field approximation: Where $d$ is the number of dimensions in the vector space embedding, $e_i$ is the $i^{\textit {th}}$ coordinate of $\mathbf {e}$, and $f_i(. | A)$ is a univariate Gaussian. To estimate the parameters $\mu _i$ and $\sigma _i^2$ of this Gaussian, we use a Bayesian approach with a flat prior: where $G(e_i;\mu _i,\sigma _i^2)$ represents the Gaussian distribution with mean $\mu _i$ and variance $\sigma _i^2$ and NI$\chi ^{2}$ is the normal inverse-$\chi ^{2}$ distribution. In other words, instead of using a single estimate of the mean $\mu $ and variance $\sigma _2$ we average over all plausible choices of these parameters. The use of the normal inverse-$\chi ^{2}$ distribution for the prior on $\mu _i$ and $\sigma _i^2$ is a common choice, which has the advantage that the above integral simplifies to a Student-t distribution. In particular, we have: where we assume $I^A_{\textit {train}}= \lbrace a_1,...,a_n\rbrace $, $a_i^j$ denotes the $i^{\textit {th}}$ coordinate of the vector embedding of $a_j$, $\overline{x_i} = \frac{1}{n}\sum _{j=1}^n a_i^j$ and $t_{n-1}$ is the Student t-distribution with $n-1$ degrees of freedom. The probability $f(\mathbf {e})$ is estimated in a similar way, but using all BabelNet instances. The prior $\lambda _A$ is tuned based on a validation set. Finally, we classify $e$ as a positive example if $P(A|\mathbf {e}) > 0.5$. GLR Classifier. We first train a Gaussian classifier as in Section UNKREF9, but now using the training instances of both $A$ and $B$. Let us denote the probability predicted by this classifier as $P(A\cup B | \textbf {e})$. The intuition is that entities for which this probability is high should either be instances of $A$ or of $B$, provided that $A$ and $B$ are conceptual neighbors. If, on the other hand, $A$ and $B$ are not conceptual neighbors, relying on this assumption is likely to lead to errors (i.e. there may be individuals whose representation is in between $A$ and $B$ which are not instances of either), which is what we need for generating the distant supervision labels. 
If $P(A\cup B | \textbf {e}) > 0.5$, we assume that $e$ either belongs to $A$ or to $B$. To distinguish between these two cases, we train a logistic regression classifier, using the instances from $I^A_{\textit {train}}$ as positive examples and those from $I^B_{\textit {train}}$ as negative examples. Putting everything together, we thus classify $e$ as a positive example for $A$ if $P(A\cup B | \textbf {e})>0.5$ and $e$ is classified as a positive example by the logistic regression classifier. Similarly, we classfiy $e$ as a positive example for $B$ if $P(A\cup B | \textbf {e})>0.5$ and $e$ is classified as a negative example by the logistic regression classifier. We will refer to this classification model as GLR (Gaussian Logistic Regression). <<<Generating Distant Supervision Labels>>> To generate the distant supervision labels, we consider a ternary classification problem for each pair of siblings $A$ and $B$. In particular, the task is to decide for a given individual $e$ whether it is an instance of $A$, an instance of $B$, or an instance of neither (where only disjoint pairs $A$ and $B$ are considered). For the Gaussian classifier, we predict $A$ iff $P(A|\mathbf {e})>0.5$ and $P(A|\mathbf {e}) > P(B|\mathbf {e})$. For the GLR classifier, we predict $A$ if $P(A\cup B|\mathbf {e}) >0.5$ and the associated logistic regression classifier predicts $A$. The condition for predicting $B$ is analogous. The test examples for this ternary classification problem consist of the elements from $I^A_{\textit {test}}$ and $I^B_{\textit {test}}$, as well as some negative examples (i.e. individuals that are neither instances of $A$ nor $B$). To select these negative examples, we first sample instances from categories that have the same parent as $A$ and $B$, choosing as many such negative examples as we have positive examples. Second, we also sample the same number of negative examples from randomly selected categories in the taxonomy. Let $F^1_{AB}$ be the F1 score achieved by the Gaussian classifier and $F^2_{AB}$ the F1 score of the GLR classifier. Our hypothesis is that $F^1_{AB} \ll F^2_{AB}$ suggests that $A$ and $B$ are conceptual neighbors, while $F^1_{AB} \gg F^2_{AB}$ suggests that they are not. This intuition is captured in the following score: where we consider $A$ and $B$ to be conceptual neighbors if $s_{AB}\gg 0.5$. <<</Generating Distant Supervision Labels>>> <<</Step 1: Predicting Conceptual Neighborhood from Embeddings>>> <<<Step 2: Predicting Conceptual Neighborhood from Text>>> We now consider the following problem: given two BabelNet categories $A$ and $B$, predict whether they are likely to be conceptual neighbors based on the sentences from a text corpus in which they are both mentioned. To train such a classifier, we use the distant supervision labels from Section SECREF8 as training data. Once this classifier has been trained, we can then use it to predict conceptual neighborhood for categories for which only few instances are known. To find sentences in which both $A$ and $B$ are mentioned, we rely on a disambiguated text corpus in which mentions of BabelNet categories are explicitly tagged. Such a disambiguated corpus can be automatically constructed, using methods such as the one proposed by BIBREF30 mancini-etal-2017-embedding, for instance. For each pair of candidate categories, we thus retrieve all sentences where they co-occur. Next, we represent each extracted sentence as a vector. 
To this end, we considered two possible strategies: Word embedding averaging: We compute a sentence embedding by simply averaging the word embeddings of each word within the sentence. Despite its simplicity, this approach has been shown to provide competitive results BIBREF31, in line with more expensive and sophisticated methods e.g. based on LSTMs. Contextualized word embeddings: The recently proposed contextualized embeddings BIBREF32, BIBREF33 have already proven successful in a wide range of NLP tasks. Instead of providing a single vector representation for all words irrespective of the context, contextualized embeddings predict a representation for each word occurrence which depends on its context. These representations are usually based on pre-trained language models. In our setting, we extract the contextualized embeddings for the two candidate categories within the sentence. To obtain this contextualized embedding, we used the last layer of the pre-trained language model, which has been shown to be most suitable for capturing semantic information BIBREF34, BIBREF35. We then use the concatenation of these two contextualized embeddings as the representation of the sentence. For both strategies, we average their corresponding sentence-level representations across all sentences in which the same two candidate categories are mentioned. Finally, we train an SVM classifier on the resulting vectors to predict for the pair of siblings $(A,B)$ whether $s_{AB}> 0.5$ holds. <<</Step 2: Predicting Conceptual Neighborhood from Text>>> <<<Step 3: Category Induction>>> Let $C$ be a category and assume that $N_1,...,N_k$ are conceptual neighbors of this category. Then we can model $C$ by generalizing the idea underpinning the GLR classifier. In particular, we first learn a Gaussian distribution from all the instances of $C$ and $N_1,...,N_k$. This Gaussian model allows us to estimate the probability $P(C\cup N_1\cup ...\cup N_k \,|\, \mathbf {e})$ that $e$ belongs to one of $C,N_1,...,N_k$. If this probability is sufficiently high (i.e. higher than 0.5), we use a multinomial logistic regression classifier to decide which of these categories $e$ is most likely to belong to. Geometrically, we can think of the Gaussian model as capturing the relevant local domain, while the multinomial logistic regression model carves up this local domain, similar as in Figure FIGREF2. In practice, we do not know with certainty which categories are conceptual neighbors of $C$. Instead, we select the $k$ categories (for some fixed constant $k$), among all the siblings of $C$, which are most likely to be conceptual neighbors, according to the text classifier from Section SECREF12. <<</Step 3: Category Induction>>> <<</Model Description>>> <<<Experiments>>> The central problem we consider is category induction: given some instances of a category, predict which other individuals are likely to be instances of that category. When enough instances are given, standard approaches such as the Gaussian classifier from Section UNKREF9, or even a simple SVM classifier, can perform well on this task. For many categories, however, we only have access to a few instances, either because the considered ontology is highly incomplete or because the considered category only has few actual instances. The main research question which we want to analyze is whether (predicted) conceptual neighborhood can help to obtain better category induction models in such cases. 
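As a rough illustration of the category induction model of Step 3 above (this is not the authors' implementation), the following Python sketch fits one Gaussian over the instances of a target category together with its assumed conceptual neighbors, then lets a multinomial logistic regression carve up that local region. It simplifies the paper's Bayesian mean-field treatment to a maximum-likelihood diagonal Gaussian with a heuristic density cutoff, and the class, method and variable names are invented for illustration.

import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

class NeighborhoodCategoryModel:
    """Gaussian 'local domain' plus a logistic-regression splitter."""

    def fit(self, X, y):
        # X: embeddings of instances of the target category C and of its
        # predicted conceptual neighbors; y: their category labels.
        self.mu_ = X.mean(axis=0)
        self.sigma_ = X.std(axis=0) + 1e-6  # diagonal Gaussian
        self.clf_ = LogisticRegression(max_iter=1000).fit(X, y)
        # Heuristic stand-in for the "belongs to C or one of its neighbors"
        # test: accept vectors whose density is comparable to that of the
        # training instances themselves.
        self.cutoff_ = np.quantile(self._domain_score(X), 0.05)
        return self

    def _domain_score(self, X):
        # mean per-dimension log-density under the diagonal Gaussian
        return norm.logpdf(X, self.mu_, self.sigma_).mean(axis=1)

    def predict_members(self, X, target_label):
        in_domain = self._domain_score(X) >= self.cutoff_
        return in_domain & (self.clf_.predict(X) == target_label)

# Usage (hypothetical data): members = NeighborhoodCategoryModel().fit(X_train, y_train).predict_members(X_test, "C")

In the paper, the in-domain decision comes from the Bayesian Gaussian model with a tuned prior rather than from a quantile heuristic, and the conceptual neighbors are the top-k siblings selected by the text classifier of Step 2.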
In Section SECREF16, we first provide more details about the experimental setting that we followed. Section SECREF23 then discusses our main quantitative results. Finally, in Section SECREF26 we present a qualitative analysis. <<<Experimental setting>>> <<<Taxonomy>>> As explained in Section SECREF3, we used BabelNet BIBREF29 as our reference taxonomy. BabelNet is a large-scale full-fledged taxonomy consisting of heterogeneous sources such as WordNet BIBREF36, Wikidata BIBREF37 and WiBi BIBREF38, making it suitable to test our hypothesis in a general setting. Vector space embeddings. Both the distant labelling method from Section SECREF8 and the category induction model itself need access to vector representations of the considered instances. To this end, we used the NASARI vectors, which have been learned from Wikipedia and are already linked to BabelNet BIBREF1. BabelNet category selection. To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing. To tune the prior probability $\lambda _A$ for these categories, we hold out 10% from the training set as a validation set. The conceptual neighbors among the considered test categories are predicted using the classifier from Section SECREF12. To obtain the distant supervision labels needed to train that classifier, we consider all BabelNet categories with at least 50 instances. This ensures that the distant supervision labels are sufficiently accurate and that there is no overlap with the categories which are used for evaluating the model. Text classifier training. As the text corpus to extract sentences for category pairs we used the English Wikipedia. In particular, we used the dump of November 2014, for which a disambiguated version is available online. This disambiguated version was constructed using the shallow disambiguation algorithm of BIBREF30 mancini-etal-2017-embedding. As explained in Section SECREF12, for each pair of categories we extracted all the sentences where they co-occur, including a maximum window size of 10 tokens between their occurrences, and 10 tokens to the left and right of the first and second category within the sentence, respectively. For the averaging-based sentence representations we used the 300-dimensional pre-trained GloVe word embeddings BIBREF39. To obtain the contextualized representations we used the pre-trained 768-dimensional BERT-base model BIBREF33.. The text classifier is trained on 3,552 categories which co-occur at least once in the same sentence in the Wikipedia corpus, using the corresponding scores $s_{AB}$ as the supervision signal (see Section SECREF12). To inspect how well conceptual neighborhood can be predicted from text, we performed a 10-fold cross validation over the training data, removing for this experiment the unclear cases (i.e., those category pairs with $s_{AB}$ scores between $0.4$ and $0.6$). We also considered a simple baselineWE based on the number of co-occurring sentences for each pairs, which we might expect to be a reasonably strong indicator of conceptual neighborhood, i.e. the more often two categories are mentiond in the same sentence, the more likely that they are conceptual neighbors. The results for this cross-validation experiment are summarized in Table TABREF22. 
Surprisingly, perhaps, the word vector averaging method seems more robust overall, while being considerably faster than the method using BERT. The results also confirm the intuition that the number of co-occurring sentences is positively correlated with conceptual neighborhood, although the results for this baseline are clearly weaker than those for the proposed classifiers. Baselines. To put the performance of our model in perspective, we consider three baseline methods for category induction. First, we consider the performance of the Gaussian classifier from Section UNKREF9, as a representative example of how well we can model each category when only considering their given instances; this model will be referred to as Gauss. Second, we consider a variant of the proposed model in which we assume that all siblings of the category are conceptual neighbors; this model will be referred to as Multi. Third, we consider a variant of our model in which the neighbors are selected based on similarity. To this end, we represent each BabelNet as their vector from the NASARI space. From the set of siblings of the target category $C$, we then select the $k$ categories whose vector representation is most similar to that of $C$, in terms of cosine similarity. This baseline will be referred to as Similarity$_k$, with $k$ the number of selected neighbors. We refer to our model as SECOND-WEA$_k$ or SECOND-BERT$_k$ (SEmantic categories with COnceptual NeighborhooD), depending on whether the word embedding averaging strategy is used or the method using BERT. <<</Taxonomy>>> <<</Experimental setting>>> <<<Quantitative Results>>> Our main results for the category induction task are summarized in Table TABREF24. In this table, we show results for different choices of the number of selected conceptual neighbors $k$, ranging from 1 to 5. As can be seen from the table, our approach substantially outperforms all baselines, with Multi being the most competitive baseline. Interestingly, for the Similarity baseline, the higher the number of neighbors, the more the performance approaches that of Multi. The relatively strong performance of Multi shows that using the siblings of a category in the BabelNet taxonomy is in general useful. However, as our results show, better results can be obtained by focusing on the predicted conceptual neighbors only. It is interesting to see that even selecting a single conceptual neighbor is already sufficient to substantially outperform the Gaussian model, although the best results are obtained for $k=4$. Comparing the WEA and BERT variants, it is notable that BERT is more successful at selecting the single best conceptual neighbor (reflected in an F1 score of 47.0 compared to 41.9). However, for $k \ge 2$, the results of the WEA and BERT are largely comparable. <<</Quantitative Results>>> <<<Qualitative Analysis>>> To illustrate how conceptual neighborhood can improve classification results, Fig. FIGREF25 shows the two first principal components of the embeddings of the instances of three BabelNet categories: Songbook, Brochure and Guidebook. All three categories can be considered to be conceptual neighbors. Brochure and Guidebook are closely related categories, and we may expect there to exist borderline cases between them. This can be clearly seen in the figure, where some instances are located almost exactly on the boundary between the two categories. On the other hand, Songbook is slightly more separated in the space. 
Let us now consider the left-most data point from the Songbook test set, which is essentially an outlier, being more similar to instances of Guidebook than typical Songbook instances. When using a Gaussian model, this data point would not be recognised as a plausible instance. When incorporating the fact that Brochure and Guidebook are conceptual neighbors of Songbook, however, it is more likely to be classified correctly. To illustrate the notion of conceptual neighborhood itself, Table TABREF27 displays some selected category pairs from the training set (i.e. the category pairs that were used to train the text classifier), which intuitively correspond to conceptual neighbors. The left column contains some selected examples of category pairs with a high $s_{AB}$ score of at least 0.9. As these examples illustrate, we found that a high $s_{AB}$ score was indeed often predictive of conceptual neighborhood. As the right column of this table illustrates, there are several category pairs with a lower $s_{AB}$ score of around 0.5 which intuitively still seem to correspond to conceptual neighbors. When looking at category pairs with even lower scores, however, conceptual neighborhood becomes rare. Moreover, while there are several pairs with high scores which are not actually conceptual neighbors (e.g. the pair Actor – Makup Artist), they tend to be categories which are still closely related. This means that the impact of incorrectly treating them as conceptual neighbors on the performance of our method is likely to be limited. On the other hand, when looking at category pairs with a very low confidence score we find many unrelated pairs, which we can expect to be more harmful when considered as conceptual neighbors, as the combined Gaussian will then cover a much larger part of the space. Some examples of such pairs include Primary school – Financial institution, Movie theatre – Housing estate, Corporate title – Pharaoh and Fraternity – Headquarters. Finally, in Tables TABREF28 and TABREF29, we show examples of the top conceptual neighbors that were selected for some categories from the test set. Table TABREF28 shows examples of BabelNet categories for which the F1 score of our SECOND-WEA$_1$ classifier was rather low. As can be seen, the conceptual neighbors that were chosen in these cases are not suitable. For instance, Bachelor's degree is a near-synonym of Undergraduate degree, hence assuming them to be conceptual neighbors would clearly be detrimental. In contrast, when looking at the examples in Table TABREF29, where categories are shown with a higher F1 score, we find examples of conceptual neighbors that are intuitively much more meaningful. <<</Qualitative Analysis>>> <<</Experiments>>> <<<Conclusions>>> We have studied the role of conceptual neighborhood for modelling categories, focusing especially on categories with a relatively small number of instances, for which standard modelling approaches are challenging. To this end, we have first introduced a method for predicting conceptual neighborhood from text, by taking advantage of BabelNet to implement a distant supervision strategy. We then used the resulting classifier to identify the most likely conceptual neighbors of a given target category, and empirically showed that incorporating these conceptual neighbors leads to a better performance in a category induction task. In terms of future work, it would be interesting to look at other types of lexical relations that can be predicted from text. 
One possible strategy would be to predict conceptual betweenness, where a category $B$ is said to be between $A$ and $C$ if $B$ has all the properties that $A$ and $C$ have in common BIBREF40 (e.g. we can think of wine as being conceptually between beer and rum). In particular, if $B$ is predicted to be conceptually between $A$ and $C$ then we would also expect the region modelling $B$ to be between the regions modelling $A$ and $C$. Acknowledgments. Jose Camacho-Collados, Luis Espinosa-Anke and Steven Schockaert were funded by ERC Starting Grant 637277. Zied Bouraoui was supported by CNRS PEPS INS2I MODERN. <<</Conclusions>>> <<</Title>>>
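To make the Similarity$_k$ baseline from the experimental setting above concrete, the following is a minimal sketch, assuming the target category and its BabelNet siblings are already represented by NASARI-style vectors; the function and variable names (and the toy random vectors) are illustrative, not the authors' code.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two dense category vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def similarity_k_neighbors(target_vec, sibling_vecs, k):
    # Similarity_k baseline: keep the k sibling categories whose vectors are
    # most similar to the target category vector, by cosine similarity.
    scored = [(name, cosine(target_vec, vec)) for name, vec in sibling_vecs.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in scored[:k]]

# Toy usage with random stand-in vectors (real vectors would come from NASARI).
rng = np.random.default_rng(0)
target = rng.normal(size=300)
siblings = {f"sibling_{i}": rng.normal(size=300) for i in range(10)}
print(similarity_k_neighbors(target, siblings, k=4))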
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nModel Description\nStep 1: Predicting Conceptual Neighborhood from Embeddings\nGenerating Distant Supervision Labels\nStep 2: Predicting Conceptual Neighborhood from Text\nStep 3: Category Induction\nExperiments\nExperimental setting\nTaxonomy\nQuantitative Results\nQualitative Analysis\nConclusions" ], "type": "outline" }
1912.01679
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition <<<Abstract>>> We propose a novel approach to semi-supervised automatic speech recognition (ASR). We first exploit a large amount of unlabeled audio data via representation learning, where we reconstruct a temporal slice of filterbank features from past and future context frames. The resulting deep contextualized acoustic representations (DeCoAR) are then used to train a CTC-based end-to-end ASR system using a smaller amount of labeled audio data. In our experiments, we show that systems trained on DeCoAR consistently outperform ones trained on conventional filterbank features, giving 42% and 19% relative improvement over the baseline on WSJ eval92 and LibriSpeech test-clean, respectively. Our approach can drastically reduce the amount of labeled data required; unsupervised training on LibriSpeech then supervision with 100 hours of labeled data achieves performance on par with training on all 960 hours directly. <<</Abstract>>> <<<Introduction>>> Current state-of-the-art models for speech recognition require vast amounts of transcribed audio data to attain good performance. In particular, end-to-end ASR models are more demanding in the amount of training data required when compared to traditional hybrid models. While obtaining a large amount of labeled data requires substantial effort and resources, it is much less costly to obtain abundant unlabeled data. For this reason, semi-supervised learning (SSL) is often used when training ASR systems. The most commonly-used SSL approach in ASR is self-training BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. In this approach, a smaller labeled set is used to train an initial seed model, which is applied to a larger amount of unlabeled data to generate hypotheses. The unlabeled data with the most reliable hypotheses are added to the training data for re-training. This process is repeated iteratively. However, self-training is sensitive to the quality of the hypotheses and requires careful calibration of the confidence measures. Other SSL approaches include: pre-training on a large amount of unlabeled data with restricted Boltzmann machines (RBMs) BIBREF5; entropy minimization BIBREF6, BIBREF7, BIBREF8, where the uncertainty of the unlabeled data is incorporated as part of the training objective; and graph-based approaches BIBREF9, where the manifold smoothness assumption is exploited. Recently, transfer learning from large-scale pre-trained language models (LMs) BIBREF10, BIBREF11, BIBREF12 has shown great success and achieved state-of-the-art performance in many NLP tasks. The core idea of these approaches is to learn efficient word representations by pre-training on massive amounts of unlabeled text via word completion. These representations can then be used for downstream tasks with labeled data. Inspired by this, we propose an SSL framework that learns efficient, context-aware acoustic representations using a large amount of unlabeled data, and then applies these representations to ASR tasks using a limited amount of labeled data. 
In our implementation, we perform acoustic representation learning using forward and backward LSTMs and a training objective that minimizes the reconstruction error of a temporal slice of filterbank features given previous and future context frames. After pre-training, we fix these parameters and add output layers with connectionist temporal classification (CTC) loss for the ASR task. The paper is organized as follows: in Section SECREF2, we give a brief overview of related work in acoustic representation learning and SSL. In Section SECREF3, we describe an implementation of our SSL framework with DeCoAR learning. We describe the experimental setup in Section SECREF4 and the results on WSJ and LibriSpeech in Section SECREF5, followed by our conclusions in Section SECREF6. <<</Introduction>>> <<<Related work>>> While semi-supervised learning has been exploited in a plethora of works on hybrid ASR systems, very little work has been done on their end-to-end counterparts BIBREF3, BIBREF13, BIBREF14. In BIBREF3, an intermediate representation of speech and text is learned via a shared encoder network. To train these representations, the encoder network was trained to optimize a combination of ASR loss, text-to-text autoencoder loss and inter-domain loss. The latter two loss functions did not require paired speech and text data. Learning efficient acoustic representations can be traced back to restricted Boltzmann machines BIBREF15, BIBREF16, BIBREF17, which allow pre-training on large amounts of unlabeled data before training the deep neural network acoustic models. More recently, acoustic representation learning has drawn increasing attention BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23 in speech processing. For example, an autoregressive predictive coding model (APC) was proposed in BIBREF20 for unsupervised speech representation learning and was applied to phone classification and speaker verification. WaveNet auto-encoders BIBREF21 proposed contrastive predictive coding (CPC) to learn speech representations and were applied to the unsupervised acoustic unit discovery task. Wav2vec BIBREF22 proposed a multi-layer convolutional neural network optimized via a noise-contrastive binary classification and was applied to WSJ ASR tasks. Unlike the speech representations described in BIBREF22, BIBREF20, our representations are optimized to use bi-directional contexts to auto-regressively reconstruct unseen frames. Thus, they are deep contextualized representations that are functions of the entire input sentence. More importantly, our work is a general semi-supervised training framework that can be applied to different systems and requires no architecture change. <<</Related work>>> <<<DEep COntextualized Acoustic Representations>>> <<<Representation learning from unlabeled data>>> Our approach is largely inspired by ELMo BIBREF10.
In ELMo, given a sequence of $T$ tokens $(w_1,w_2,...,w_T)$, a forward language model (implemented with an LSTM) computes its probability using the chain rule decomposition: Similarly, a backward language model computes the sequence probability by modeling the probability of token $w_t$ given its future context $w_{t+1},\cdots , w_T$ as follows: ELMo is trained by maximizing the joint log-likelihood of both forward and backward language model probabilities: where $\Theta _x$ is the parameter for the token representation layer, $\Theta _s$ is the parameter for the softmax layer, and $\overrightarrow{\Theta }_{\text{LSTM}}$, $\overleftarrow{\Theta }_{\text{LSTM}}$ are the parameters of forward and backward LSTM layers, respectively. As the word representations are learned with neural networks that use past and future information, they are referred to as deep contextualized word representations. For speech processing, predicting a single frame $\mathbf {x}_t$ may be a trivial task, as it could be solved by exploiting the temporal smoothness of the signal. In the APC model BIBREF20, the authors propose predicting a frame $K$ steps ahead of the current one. Namely, the model aims to minimize the $\ell _1$ loss between an acoustic feature vector $\mathbf {x}$ at time $t+K$ and a reconstruction $\mathbf {y}$ predicted at time $t$: $\sum _{t=1}^{T-K} |\mathbf {x}_{t+K} - \mathbf {y}_t|$. They conjectured this would induce the model to learn more global structure rather than simply leveraging local information within the signal. We propose combining the bidirectionality of ELMo and the reconstruction objective of APC to give deep contextualized acoustic representations (DeCoAR). We train the model to predict a slice of $K$ acoustic feature vectors, given past and future acoustic vectors. As depicted on the left side of Figure FIGREF1, a stack of forward and backward LSTMs are applied to the entire unlabeled input sequence $\mathbf {X} = (\mathbf {x}_1,\cdots ,\mathbf {x}_T)$. The network computes a hidden representation that encodes information from both previous and future frames (i.e. $\overrightarrow{\mathbf {z}}_t, \overleftarrow{\mathbf {z}}_t$) for each frame $\mathbf {x}_t$. Given a sequence of acoustic feature inputs $(\mathbf {x}_1, ..., \mathbf {x}_{T}) \in \mathbb {R}^d$, for each slice $(\mathbf {x}_t, \mathbf {x}_{t+1}, ..., \mathbf {x}_{t+K})$ starting at time step $t$, our objective is defined as follows: where $[\overrightarrow{\mathbf {z}}_t; \overleftarrow{\mathbf {z}}_{t}] \in \mathbb {R}^{2h}$ are the concatenated forward and backward states from the last LSTM layer, and is a position-dependent feed-forward network with 512 hidden dimensions. The final loss $\mathcal {L}$ is summed over all possible slices in the entire sequence: Note this can be implemented efficiently as a layer which predicts these $(K+1)$ frames at each position $t$, all at once. We compare with the use of unidirectional LSTMs and various slice sizes in Section SECREF5. <<</Representation learning from unlabeled data>>> <<<End-to-end ASR training with labeled data>>> After we have pre-trained the DeCoAR on unlabeled data, we freeze the parameters in the architecture. To train an end-to-end ASR system using labeled data, we remove the reconstruction layer and add two BLSTM layers with CTC loss BIBREF24, as illustrated on the right side of Figure FIGREF1. The DeCoAR vectors induced by the labeled data in the forward and backward layers are concatenated. 
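As a concrete illustration of the slice-reconstruction objective defined above, here is a minimal PyTorch sketch; it is not the authors' implementation, the hyperparameter values are placeholders, the double loop trades efficiency for clarity, and the choice of pairing the forward state at the start of the slice with the backward state at its end follows the description of the decoder FFNs given later in the paper.

import torch
import torch.nn as nn

class DeCoARSketch(nn.Module):
    # Minimal sketch of the slice-reconstruction objective: predict the frames
    # x_t, ..., x_{t+K} from the forward state at t and the backward state at t+K,
    # with one position-dependent feed-forward decoder per offset, under an L1 loss.
    def __init__(self, feat_dim=80, hidden=1024, layers=4, slice_size=18, ffn_dim=512):
        super().__init__()
        self.K = slice_size
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=layers,
                               batch_first=True, bidirectional=True)
        self.decoders = nn.ModuleList([
            nn.Sequential(nn.Linear(2 * hidden, ffn_dim), nn.ReLU(),
                          nn.Linear(ffn_dim, feat_dim))
            for _ in range(slice_size + 1)
        ])

    def forward(self, x):
        # x: (batch, T, feat_dim) filterbank features.
        z, _ = self.encoder(x)               # (batch, T, 2 * hidden)
        fwd, bwd = z.chunk(2, dim=-1)        # forward / backward halves
        _, T, _ = x.shape
        loss = x.new_zeros(())
        for t in range(T - self.K):
            ctx = torch.cat([fwd[:, t], bwd[:, t + self.K]], dim=-1)
            for i, ffn in enumerate(self.decoders):
                loss = loss + (x[:, t + i] - ffn(ctx)).abs().sum()
        return loss

# Toy usage on random features (real inputs would be frame-level filterbanks).
model = DeCoARSketch(feat_dim=40, hidden=64, layers=2, slice_size=4, ffn_dim=32)
print(model(torch.randn(2, 30, 40)).item())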
We fine-tune the parameters of this ASR-specific new layer on the labeled data. While we use LSTMs and CTC loss in our implementation, our SSL approach should work for other layer choices (e.g. TDNN, CNN, self-attention) and other downstream ASR models (e.g. hybrid, seq2seq, RNN transducers) as well. <<</End-to-end ASR training with labeled data>>> <<</DEep COntextualized Acoustic Representations>>> <<<Experimental Setup>>> <<<Data>>> We conducted our experiments on the WSJ and LibriSpeech datasets, pre-training by using one of the two training sets as unlabeled data. To simulate the SSL setting in WSJ, we used 30%, 50%, as well as 100% of the labeled data for ASR training, consisting of 25 hours, 40 hours, and 81 hours, respectively. We used dev93 for validation and eval92 for evaluation. For LibriSpeech, the amount of training data used varied from 100 hours to the entire 960 hours. We used dev-clean for validation and test-clean and test-other for evaluation. <<</Data>>> <<<ASR systems>>> Our experiments consisted of three different setups: 1) a fully-supervised system using all labeled data; 2) an SSL system using wav2vec features; 3) an SSL system using our proposed DeCoAR features. All models used were based on deep BLSTMs with the CTC loss criterion. In the supervised ASR setup, we used conventional log-mel filterbank features, which were extracted with a 25ms sliding window at a 10ms frame rate. The features were normalized via mean subtraction and variance normalization on a per-speaker basis. The model had 6 BLSTM layers, with 512 cells in each direction. We found that increasing the number of cells did not further improve the performance and thus used this configuration as our best supervised ASR baseline. The output CTC labels were 71 phonemes plus one blank symbol. In the SSL ASR setup, we pre-trained a 4-layer BLSTM (1024 cells per sub-layer) to learn DeCoAR features according to the loss defined in Equation DISPLAY_FORM4 and used a slice size of 18. We optimized the network with SGD and used a Noam learning rate schedule, where we started with a learning rate of 0.001, gradually warmed up for 500 updates, and then performed inverse square-root decay. We grouped the input sequences by length with a batch size of 64, and trained the models on 8 GPUs. After the representation network was trained, we froze its parameters, and added a projection layer, followed by a 2-layer BLSTM with CTC loss on top of it. We fed the labeled data to the network. For comparison, we obtained 512-dimensional wav2vec representations BIBREF22 from the wav2vec-large model. Their model was pre-trained on 960-hour LibriSpeech data with contrastive loss and had 12 convolutional layers with skip connections. For evaluation purposes, we applied WFST-based decoding using EESEN BIBREF25. We composed the CTC labels, lexicons and language models (unpruned trigram LM for WSJ, 4-gram for LibriSpeech) into a decoding graph. The acoustic model score was set to $0.8$ and $1.0$ for WSJ and LibriSpeech, respectively, and the blank symbol prior scale was set to $0.3$ for both tasks. We report the performance in word error rate (WER). <<</ASR systems>>> <<</Experimental Setup>>> <<<Results>>> <<<Semi-supervised WSJ results>>> Table TABREF14 shows our results on semi-supervised WSJ. We demonstrate that DeCoAR features outperform filterbank and wav2vec features, with relative improvements of 42% and 20%, respectively.
The lower part of the table shows that with smaller amounts of labeled data, the DeCoAR features are significantly better than the filterbank features: compared to the system trained on 100% of the labeled data with filterbank features, we achieve comparable results on eval92 using 30% of the labeled data and better performance on eval92 using 50% of the labeled data. <<</Semi-supervised WSJ results>>> <<<Semi-supervised LibriSpeech results>>> Table TABREF7 shows the results on semi-supervised LibriSpeech. Both our representations and wav2vec BIBREF22 are trained on 960h LibriSpeech data. We conduct our semi-supervised experiments using 100h (train-clean-100), 360h (train-clean-360), 460h, and 960h of training data. Our approach outperforms both the baseline and the wav2vec model in each SSL scenario. One notable observation is that using only 100 hours of transcribed data achieves very similar performance to the system trained on the full 960-hour data with filterbank features. On the more challenging test-other dataset, we also achieve performance on par with the filterbank baseline using a 360h subset. Furthermore, training with our DeCoAR features improves the baseline even when using the exact same training data (960h). Note that while BIBREF26 introduced SpecAugment to significantly improve LibriSpeech performance via data augmentation, and BIBREF27 achieved state-of-the-art results using both hybrid and end-to-end models, our approach focuses on the SSL case with less labeled training data via our DeCoAR features. <<</Semi-supervised LibriSpeech results>>> <<<Ablation Study and Analysis>>> <<<Context window size>>> We study the effect of the context window size during pre-training. Table TABREF20 shows that masking and predicting a larger slice of frames can actually degrade performance, while increasing training time. A similar effect was found in SpanBERT BIBREF28, another deep contextualized word representation, which found that masking a mean span of 3.8 consecutive words was ideal for their word reconstruction objective. <<</Context window size>>> <<<Unidirectional versus bidirectional context>>> Next, we study the importance of bidirectional context by training a unidirectional LSTM, which corresponds to only using $\overrightarrow{\mathbf {z}}_t$ to predict $\mathbf {x}_t, \cdots , \mathbf {x}_{t+K}$. Table TABREF22 shows that this unidirectional model achieves comparable performance to the wav2vec model BIBREF22, suggesting that bidirectionality is the largest contributor to DeCoAR's improved performance. <<</Unidirectional versus bidirectional context>>> <<<DeCoAR as denoiser>>> Since our model is trained by predicting masked frames, DeCoAR has the side effect of learning decoder feed-forward networks $\text{FFN}_i$ which reconstruct the $(t+i)$-th filterbank frame from the contexts $\overrightarrow{\mathbf {z}}_t$ and $\overleftarrow{\mathbf {z}}_{t+K}$. In this section, we consider the spectrogram reconstructed by taking the output of $\text{FFN}_i$ at all times $t$. The qualitative result is depicted in Figure FIGREF15, where the slice size is 18. We see that when $i=0$ (i.e., when reconstructing the $t$-th frame from $[\overrightarrow{\mathbf {z}}_t; \overleftarrow{\mathbf {z}}_{t+K}]$), the reconstruction is almost perfect. However, as soon as one predicts unseen frames $i=1, 4, 8$ (of 16), the reconstruction becomes more simplistic, but not by much. Background energy in the silent frames 510-550 is zeroed out.
By $i=8$, artifacts begin to occur, such as an erroneous sharp band of energy being predicted around frame 555. This behavior is compatible with recent NLP works that interpret contextual word representations as denoising autoencoders BIBREF12. The surprising ability of DeCoAR to broadly reconstruct a frame $\overrightarrow{\mathbf {x}}_{t+{K/2}}$ in the middle of a missing 16-frame slice suggests that its representations $[\overrightarrow{\mathbf {z}}_t; \overleftarrow{\mathbf {z}}_{t+K}]$ capture longer-term phonetic structure during unsupervised pre-training, as with APC BIBREF20. This motivates its success in the semi-supervised ASR task with only two additional layers, as it suggests DeCoAR learns phonetic representations similar to those likely learned by the first 4 layers of a corresponding end-to-end ASR model. <<</DeCoAR as denoiser>>> <<</Ablation Study and Analysis>>> <<</Results>>> <<<Conclusion>>> In this paper, we introduce a novel semi-supervised learning approach for automatic speech recognition. We first propose a novel objective for a deep bidirectional LSTM network, where large amounts of unlabeled data are used to learn deep contextualized acoustic representations (DeCoAR). These DeCoAR features are then used as the representations of labeled data to train a CTC-based end-to-end ASR model. In our experiments, we show a 42% relative improvement on WSJ compared to a baseline trained on log-mel filterbank features. On LibriSpeech, we achieve performance similar to training on all 960 hours of labeled data by pretraining on unlabeled data and then using only 100 hours of labeled data. While we use BLSTM-CTC as our ASR model, our approach can be applied to other end-to-end ASR models.
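To make the two-stage recipe above concrete, the following is a minimal PyTorch sketch of the supervised stage only: a pretrained bidirectional encoder is frozen and a small projection plus 2-layer BLSTM head is trained with CTC loss. Module names, sizes, and the tiny stand-in encoder are illustrative assumptions, not the released system.

import torch
import torch.nn as nn

class CTCHeadSketch(nn.Module):
    # Sketch of the supervised stage: a frozen DeCoAR-style encoder followed by
    # a projection, a 2-layer BLSTM, and a CTC output layer (sizes illustrative).
    def __init__(self, encoder, enc_dim, proj_dim=512, hidden=512, n_labels=72):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # freeze the pretrained parameters
            p.requires_grad = False
        self.proj = nn.Linear(enc_dim, proj_dim)
        self.blstm = nn.LSTM(proj_dim, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)   # phone labels + CTC blank

    def forward(self, feats):
        with torch.no_grad():
            z, _ = self.encoder(feats)        # contextual acoustic representations
        h, _ = self.blstm(self.proj(z))
        return self.out(h).log_softmax(-1)    # (batch, T, n_labels)

# Toy usage: a tiny stand-in encoder and random CTC targets.
enc = nn.LSTM(40, 64, num_layers=2, batch_first=True, bidirectional=True)
model = CTCHeadSketch(enc, enc_dim=128, proj_dim=64, hidden=64, n_labels=72)
feats = torch.randn(2, 50, 40)
log_probs = model(feats).transpose(0, 1)      # CTCLoss expects (T, batch, C)
targets = torch.randint(1, 72, (2, 10))
lengths_in = torch.full((2,), 50, dtype=torch.long)
lengths_tgt = torch.full((2,), 10, dtype=torch.long)
print(nn.CTCLoss(blank=0)(log_probs, targets, lengths_in, lengths_tgt).item())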
{ "references": [ "Title\nAbstract\nIntroduction\nRelated work\nDEep COntextualized Acoustic Representations\nRepresentation learning from unlabeled data\nEnd-to-end ASR training with labeled data\nExperimental Setup\nData\nASR systems\nResults\nSemi-supervised WSJ results\nSemi-supervised LibriSpeech results\nAblation Study and Analysis\nContext window size\nUnidirectional versus bidirectional context\nDeCoAR as denoiser\nConclusion" ], "type": "outline" }
2004.03061
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Information-Theoretic Probing for Linguistic Structure <<<Abstract>>> The success of neural networks on a diverse set of NLP tasks has led researchers to question how much do these networks actually know about natural language. Probes are a natural way of assessing this. When probing, a researcher chooses a linguistic task and trains a supervised model to predict annotation in that linguistic task from the network's learned representations. If the probe does well, the researcher may conclude that the representations encode knowledge related to the task. A commonly held belief is that using simpler models as probes is better; the logic is that such models will identify linguistic structure, but not learn the task itself. We propose an information-theoretic formalization of probing as estimating mutual information that contradicts this received wisdom: one should always select the highest performing probe one can, even if it is more complex, since it will result in a tighter estimate. The empirical portion of our paper focuses on obtaining tight estimates for how much information BERT knows about parts of speech in a set of five typologically diverse languages that are often underrepresented in parsing research, plus English, totaling six languages. We find BERT accounts for only at most 5% more information than traditional, type-based word embeddings. <<</Abstract>>> <<<Introduction>>> Neural networks are the backbone of modern state-of-the-art Natural Language Processing (NLP) systems. One inherent by-product of training a neural network is the production of real-valued representations. Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks' impressive performance on many NLP tasks BIBREF0. As a result of this speculation, one common thread of research focuses on the construction of probes, i.e., supervised models that are trained to extract the linguistic properties directly BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. A syntactic probe, then, is a model for extracting syntactic properties, such as part-of-speech, from the representations BIBREF6. In this work, we question what the goal of probing for linguistic properties ought to be. Informally, probing is often described as an attempt to discern how much information representations encode about a specific linguistic property. We make this statement more formal: We assert that the goal of probing ought to be estimating the mutual information BIBREF7 between a representation-valued random variable and a linguistic property-valued random variable. This formulation gives probing a clean, information-theoretic foundation, and allows us to consider what “probing” actually means. Our analysis also provides insight into how to choose a probe family: We show that choosing the highest-performing probe, independent of its complexity, is optimal for achieving the best estimate of mutual information (MI). This contradicts the received wisdom that one should always select simple probes over more complex ones BIBREF8, BIBREF9, BIBREF10. 
In this context, we also discuss the recent work of hewitt-liang-2019-designing who propose selectivity as a criterion for choosing families of probes. hewitt-liang-2019-designing define selectivity as the performance difference between a probe on the target task and a control task, writing “[t]he selectivity of a probe puts linguistic task accuracy in context with the probe's capacity to memorize from word types.” They further ponder: “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” Information-theoretically, there is no difference between learning the task and probing for linguistic structure, as we will show; thus, it follows that one should always employ the best possible probe for the task without resorting to artificial constraints. In support of our discussion, we empirically analyze word-level part-of-speech labeling, a common syntactic probing task BIBREF6, BIBREF11, within our framework. Working on a typologically diverse set of languages (Basque, Czech, English, Finnish, Tamil, and Turkish), we show that the representations from BERT, a common contextualized embedder, only account for at most $5\%$ more of the part-of-speech tag entropy than a control. These modest improvements suggest that most of the information needed to tag part-of-speech well is encoded at the lexical level, and does not require the sentential context of the word. Put more simply, words are not very ambiguous with respect to part of speech, a result known to practitioners of NLP BIBREF12. We interpret this to mean that part-of-speech labeling is not a very informative probing task. We also remark that formulating probing information-theoretically gives us a simple, but stunning result: contextual word embeddings, e.g., BERT BIBREF13 and ELMo BIBREF14, contain the same amount of information about the linguistic property of interest as the original sentence. This follows naturally from the data-processing inequality under a very mild assumption. What this suggests is that, in a certain sense, probing for linguistic properties in representations may not be a well grounded enterprise at all. <<</Introduction>>> <<<Word-Level Syntactic Probes for Contextual Embeddings>>> Following hewitt-liang-2019-designing, we consider probes that examine syntactic knowledge in contextualized embeddings. These probes only consider a single token's embedding and try to perform the task using only that information. Specifically, in this work, we consider part-of-speech (POS) labeling: determining a word's part of speech in a given sentence. For example, we wish to determine whether the word love is a noun or a verb. This task requires the sentential context for success. As an example, consider the utterance “love is blind” where, only with the context, is it clear that love is a noun. Thus, to do well on this task, the contextualized embeddings need to encode enough about the surrounding context to correctly guess the POS. <<<Notation>>> Let $S$ be a random variable ranging over all possible sequences of words. For the sake of this paper, we assume the vocabulary $\mathcal {V}$ is finite and, thus, the values $S$ can take are in $\mathcal {V}^*$. We write $\mathbf {s}\in S$ as $\mathbf {s}= w_1 \cdots w_{|\mathbf {s}|}$ for a specific sentence, where each $w_i \in \mathcal {V}$ is a specific word in the sentence and the position $i \in \mathbb {N}^{+}$. 
We also define the random variable $W$ that ranges over the vocabulary $\mathcal {V}$. We define both a sentence-level random variable $S$ and a word-level random variable $W$ since each will be useful in different contexts during our exposition. Next, let $T$ be a random variable whose possible values are the analyses $t$ that we want to consider for word $w_i$ in its sentential context, $\mathbf {s}= w_1 \cdots w_i \cdots w_{|\mathbf {s}|}$. In this work, we will focus on predicting the part-of-speech tag of the $i^\text{th}$ word $w_i$. We denote the set of values $T$ can take as the set $\mathcal {T}$. Finally, let $R$ be a representation-valued random variable for the $i^\text{th}$ word $w_i$ in a sentence derived from the entire sentence $\mathbf {s}$. We write $\mathbf {r}\in \mathbb {R}^d$ for a value of $R$. While any given value $\mathbf {r}$ is a continuous vector, there are only a countable number of values $R$ can take. To see this, note there are only a countable number of sentences in $\mathcal {V}^*$. Next, we assume there exists a true distribution $p(t, \mathbf {s}, i)$ over analyses $t$ (elements of $\mathcal {T}$), sentences $\mathbf {s}$ (elements of $\mathcal {V}^*$), and positions $i$ (elements of $\mathbb {N}^{+}$). Note that the conditional distribution $p(t \mid \mathbf {s}, i)$ gives us the true distribution over analyses $t$ for the $i^{\text{th}}$ word in the sentence $\mathbf {s}$. We will augment this distribution such that $p$ is additionally a distribution over $\mathbf {r}$, i.e., where we define the augmentation as a Dirac's delta function Since contextual embeddings are a deterministic function of a sentence $\mathbf {s}$, the augmented distribution in eq:true has no more randomness than the original—its entropy is the same. We assume the values of the random variables defined above are distributed according to this (unknown) $p$. While we do not have access to $p$, we assume the data in our corpus were drawn according to it. Note that $W$—the random variable over possible words—is distributed according to the marginal distribution where we define the deterministic distribution <<</Notation>>> <<<Probing as Mutual Information>>> The task of supervised probing is an attempt to ascertain how much information a specific representation $\mathbf {r}$ tells us about the value of $t$. This is naturally expressed as the mutual information, a quantity from information theory: where we define the entropy, which is constant with respect to the representations, as and where we define the conditional entropy as the point-wise conditional entropy inside the sum is defined as Again, we will not know any of the distributions required to compute these quantities; the distributions in the formulae are marginals and conditionals of the true distribution discussed in eq:true. <<</Probing as Mutual Information>>> <<<Bounding Mutual Information>>> The desired conditional entropy, $\mathrm {H}(T \mid R)$ is not readily available, but with a model $q_{{\theta }}(\mathbf {t}\mid \mathbf {r})$ in hand, we can upper-bound it by measuring their empirical cross entropy where $\mathrm {H}_{q_{{\theta }}}(T \mid R)$ is the cross-entropy we obtain by using $q_{{\theta }}$ to get this estimate. Since the KL divergence is always positive, we may lower-bound the desired mutual information This bound gets tighter, the more similar (in the sense of the KL divergence) $q_{{\theta }}(\cdot \mid \mathbf {r})$ is to the true distribution $p(\cdot \mid \mathbf {r})$. 
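As a sketch of how this lower bound is estimated in practice, suppose we hold out a set of (tag, representation) pairs and have trained some probe that returns log-probabilities over tags; a plug-in entropy estimate and the probe's empirical cross-entropy then give $\mathrm{H}(T) - \mathrm{H}_{q_{{\theta }}}(T \mid R)$. Everything below (the names, the uniform stand-in probe, the use of nats) is an illustrative assumption rather than the authors' code.

import numpy as np

def entropy_estimate(tags):
    # Plug-in estimate of H(T) in nats from empirical tag frequencies.
    _, counts = np.unique(tags, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def mi_lower_bound(tags, reps, probe_log_proba):
    # I(T;R) >= H(T) - H_q(T|R); the better the probe, the tighter the bound.
    # probe_log_proba(reps) returns log q(t|r) for every tag, one row per sample.
    log_q = probe_log_proba(reps)                         # (n_samples, n_tags)
    cross_ent = float(-log_q[np.arange(len(tags)), tags].mean())
    return entropy_estimate(tags) - cross_ent

# Toy usage with a uniform "probe"; a real probe would be trained on (r, t) pairs.
rng = np.random.default_rng(0)
tags = rng.integers(0, 17, size=1000)                     # e.g. 17 UPOS tags
reps = rng.normal(size=(1000, 768))                       # e.g. BERT vectors
uniform_probe = lambda r: np.full((len(r), 17), -np.log(17.0))
print(mi_lower_bound(tags, reps, uniform_probe))          # close to zero: uninformative probe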
<<<Bigger Probes are Better.>>> If we accept mutual information as a natural measure for how much representations encode a target linguistic task (§SECREF6), then the best estimate of that mutual information is the one where the probe $q_{{\theta }}(t \mid \mathbf {r})$ is best at the target task. In other words, we want the best probe $q_{{\theta }}(t \mid \mathbf {r})$ such that we get the tightest bound to the actual distribution $p(t\mid \mathbf {r})$. This paints the question posed by hewitt-liang-2019-designing, who write “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” as a false dichotomy. From an information-theoretic view, we will always prefer the probe that does better at the target task, since there is no difference between learning a task and the representations encoding the linguistic structure. <<</Bigger Probes are Better.>>> <<</Bounding Mutual Information>>> <<</Word-Level Syntactic Probes for Contextual Embeddings>>> <<<Control Functions>>> To place the performance of a probe in perspective, hewitt-liang-2019-designing develop the notion of a control task. Inspired by this, we develop an analogue we term control functions, which are functions of the representation-valued random variable $R$. Similar to hewitt-liang-2019-designing's control tasks, the goal of a control function $\mathbf {c}(\cdot )$ is to place the mutual information $\mathrm {I}(T; R)$ in the context of a baseline that the control function encodes. Control functions have their root in the data-processing inequality BIBREF7, which states that, for any function $\mathbf {c}(\cdot )$, we have In other words, information can only be lost by processing data. A common adage associated with this inequality is “garbage in, garbage out.” <<<Type-Level Control Functions>>> We will focus on type-level control functions in this paper; these functions have the effect of decontextualizing the embeddings. Such functions allow us to inquire how much the contextual aspect of the contextual embeddings help the probe perform the target task. To show that we may map from contextual embeddings to the identity of the word type, we need the following assumption about the embeddings. Assumption 1 Every contextualized embedding is unique, i.e., for any pair of sentences $\mathbf {s}, \mathbf {s}^{\prime } \in \mathcal {V}^*$, we have $(\mathbf {s}\ne \mathbf {s}^{\prime }) \mid \mid (i \ne j) \Rightarrow \textsc {bert} (\mathbf {s})_i \ne \textsc {bert} (\mathbf {s}^{\prime })_j$ for all $i \in \lbrace 1, \ldots |\mathbf {s}|\rbrace $ and $j \in \lbrace 1, \ldots , |\mathbf {s}^{\prime }|\rbrace $. We note that ass:one is mild. Contextualized word embeddings map words (in their context) to $\mathbb {R}^d$, which is an uncountably infinite space. However, there are only a countable number of sentences, which implies only a countable number of sequences of real vectors in $\mathbb {R}^d$ that a contextualized embedder may produce. The event that any two embeddings would be the same across two distinct sentences is infinitesimally small. ass:one yields the following corollary. Corollary 1 There exists a function $\emph {\texttt {id} } : \mathbb {R}^d \rightarrow V$ that maps a contextualized embedding to its word type. The function $\emph {\texttt {id} }$ is not a bijection since multiple embeddings will map to the same type. 
Using cor:one, we can show that any non-contextualized word embedding will contain no more information than a contextualized word embedding. More formally, we do this by constructing a look-up function $\mathbf {e}: V \rightarrow \mathbb {R}^d$ that maps a word to a word embedding. This embedding may be one-hot, randomly generated ahead of time, or the output of a data-driven embedding method, e.g. fastText BIBREF15. We can then construct a control function as the composition of the look-up function $\mathbf {e}$ and the id function $\texttt {id} $. Using the data-processing inequality, we can prove that in a word-level prediction task, any non-contextual (type level) word-embedding will contain no more information than a contextualized (token level) one, such as BERT and ELMo. Specifically, we have This result is intuitive and, perhaps, trivial—context matters information-theoretically. However, it gives us a principled foundation by which to measure the effectiveness of probes as we will show in sec:gain. <<</Type-Level Control Functions>>> <<<How Much Information Did We Gain?>>> We will now quantify how much a contextualized word embedding knows about a task with respect to a specific control function $\mathbf {c}(\cdot )$. We term how much more information the contextualized embeddings have about a task than a control variable the gain, which we define as The gain function will be our method for measuring how much more information contextualized representations have over a controlled baseline, encoded as the function $\mathbf {c}$. We will empirically estimate this value in sec:experiments. Interestingly enough, the gain has a straightforward interpretation. Proposition 1 The gain function is equal to the following conditional mutual information The jump from the first to the second equality follows since $R$ encodes all the information about $T$ provided by $\mathbf {c}(R)$ by construction. prop:interpretation gives us a clear understanding of the quantity we wish to estimate: It is how much information about a task is encoded in the representations, given some control knowledge. If properly designed, this control transformation will remove information from the probed representations. <<</How Much Information Did We Gain?>>> <<<Approximating the Gain>>> The gain, as defined in eq:gain, is intractable to compute. In this section we derive a pair of variational bounds on $\mathcal {G}(T, R, \mathbf {e})$—one upper and one lower. To approximate the gain, we will simultaneously minimize an upper and a lower-bound on eq:gain. We begin by approximating the gain in the following manner these cross-entropies can be empirically estimated. We will assume access to a corpus $\lbrace (t_i, \mathbf {r}_i)\rbrace _{i=1}^N$ that is human-annotated for the target linguistic property; we further assume that these are samples $(t_i, \mathbf {r}_i) \sim p(\cdot , \cdot )$ from the true distribution. This yields a second approximation that is tractable: This approximation is exact in the limit $N \rightarrow \infty $ by the law of large numbers. We note the approximation given in eq:approx may be either positive or negative and its estimation error follows from eq:entestimate where we abuse the KL notation to simplify the equation. This is an undesired behavior since we know the gain itself is non-negative, by the data-processing inequality, but we have yet to devise a remedy. We justify the approximation in eq:approx with a pair of variational bounds. 
The following two corollaries are a result of thm:variationalbounds in appendix:a. Corollary 2 We have the following upper-bound on the gain Corollary 3 We have the following lower-bound on the gain The conjunction of cor:upper and cor:lower suggest a simple procedure for finding a good approximation: We choose $q_{{\theta }1}(\cdot \mid r)$ and $q_{{\theta }2}(\cdot \mid r)$ so as to minimize eq:upper and maximize eq:lower, respectively. These distributions contain no overlapping parameters, by construction, so these two optimization routines may be performed independently. We will optimize both with a gradient-based procedure, discussed in sec:experiments. <<</Approximating the Gain>>> <<</Control Functions>>> <<<Understanding Probing Information-Theoretically>>> In sec:control-functions we developed an information-theoretic framework for thinking about probing contextual word embeddings for linguistic structure. However, we now cast doubt on whether probing makes sense as a scientific endeavour. We prove in sec:context that contextualized word embeddings, by construction, contain no more information about a word-level syntactic task than the original sentence itself. Nevertheless, we do find a meaningful scientific interpretation of control functions. We expound upon this in sec:control-functions-meaning, arguing that control functions are useful, not for understanding representations, but rather for understanding the influence of sentential context on word-level syntactic tasks, e.g., labeling words with their part of speech. <<<You Know Nothing, BERT>>> To start, we note the following corollary Corollary 4 It directly follows from ass:one that $\textsc {bert} $ is a bijection between sentences $\mathbf {s}$ and sequences of embeddings $\langle \mathbf {r}_1, \ldots , \mathbf {r}_{|\mathbf {s}|} \rangle $. As $\textsc {bert} $ is a bijection, it has an inverse, which we will denote as $\textsc {bert}^{-1} $. Theorem 1 The function $\textsc {bert} (S)$ cannot provide more information about $T$ than the sentence $S$ itself. This implies $\mathrm {I}(T ; S) = \mathrm {I}(T; \textsc {bert} (S))$. We remark this is not a BERT-specific result—it rests on the fact that the data-processing inequality is tight for bijections. While thm:bert is a straightforward application of the data-processing inequality, it has deeper ramifications for probing. It means that if we search for syntax in the contextualized word embeddings of a sentence, we should not expect to find any more syntax than is present in the original sentence. In a sense, thm:bert is a cynical statement: the endeavour of finding syntax in contextualized embeddings sentences is nonsensical. This is because, under ass:one, we know the answer a priori—the contextualized word embeddings of a sentence contain exactly the same amount of information about syntax as does the sentence itself. <<</You Know Nothing, BERT>>> <<<What Do Control Functions Mean?>>> Information-theoretically, the interpretation of control functions is also interesting. As previously noted, our interpretation of control functions in this work does not provide information about the representations themselves. Actually, the same reasoning used in cor:one could be used to devise a function $\texttt {id} _s(\mathbf {r})$ which led from a single representation back to the whole sentence. For a type-level control function $\mathbf {c}$, by the data-processing inequality, we have that $\mathrm {I}(T; W) \ge \mathrm {I}(T; \mathbf {c}(R))$. 
Consequently, we can get an upper-bound on how much information we can get out of a decontextualized representation. If we assume we have perfect probes, then we get that the true gain function is $\mathrm {I}(T; S) - \mathrm {I}(T; W) = \mathrm {I}(T; S \mid W)$. This quantity is interpreted as the amount of knowledge we gain about the word-level task $T$ by knowing $S$ (i.e., the sentence) in addition to $W$ (i.e., the word). Therefore, a perfect probe would provide insights about language and not about the actual representations, which are no more than a means to an end. <<</What Do Control Functions Mean?>>> <<<Discussion: Ease of Extraction>>> We do acknowledge another interpretation of the work of hewitt-liang-2019-designing inter alia; BERT makes the syntactic information present in an ordered sequence of words more easily extractable. However, ease of extraction is not a trivial notion to formalize, and indeed, we know of no attempt to do so; it is certainly more complex to determine than the number of layers in a multi-layer perceptron (MLP). Indeed, a MLP with a single hidden layer can represent any function over the unit cube, with the caveat that we may need a very large number of hidden units BIBREF16. Although for perfect probes the above results should hold, in practice $\texttt {id} (\cdot )$ and $\mathbf {c}(\cdot )$ may be hard to approximate. Furthermore, if these functions were to be learned, they might require an unreasonably large dataset. A random embedding control function, for example, would require an infinitely large dataset to be learned—or at least one that contained all words in the vocabulary $V$. “Better” representations should make their respective probes more easily learnable—and consequently their encoded information more accessible. We suggest that future work on probing should focus on operationalizing ease of extraction more rigorously—even though we do not attempt this ourselves. The advantage of simple probes is that they may reveal something about the structure of the encoded information—i.e., is it structured in such a way that it can be easily taken advantage of by downstream consumers of the contextualized embeddings? We suspect that many researchers who are interested in less complex probes have implicitly had this in mind. <<</Discussion: Ease of Extraction>>> <<</Understanding Probing Information-Theoretically>>> <<<A Critique of Control Tasks>>> While this paper builds on the work of hewitt-liang-2019-designing, and we agree with them that we should have control tasks when probing for linguistic properties, we disagree with parts of the methodology for the control task construction. We present these disagreements here. <<<Structure and Randomness>>> hewitt-liang-2019-designing introduce control tasks to evaluate the effectiveness of probes. We draw inspiration from this technique as evidenced by our introduction of control functions. However, we take issue with the suggestion that controls should have structure and randomness, to use the terminology from hewitt-liang-2019-designing. They define structure as “the output for a word token is a deterministic function of the word type.” This means that they are stripping the language of ambiguity with respect to the target task. In the case of part-of-speech labeling, love would either be a noun or a verb in a control task, never both: this is a problem. 
The second feature of control tasks is randomness, i.e., “the output for each word type is sampled independently at random.” In conjunction, structure and randomness may yield a relatively trivial task that does not look at all like natural language. What is more, there is a closed-form solution for an optimal, retrieval-based “probe” that has zero parameters: If a word type appears in the training set, return the label with which it was annotated there, otherwise return the most frequently occurring label across all words in the training set. This probe will achieve an accuracy that is 1 minus the out-of-vocabulary rate (the number of tokens in the test set that correspond to novel types divided by the number of tokens) times the percentage of tags in the test set that do not correspond to the most frequent tag (the error rate of the guess-the-most-frequent-tag classifier). In short, the best model for a control task is a pure memorizer that guesses the most frequent tag for out-of-vocabulary words. <<</Structure and Randomness>>> <<<What's Wrong with Memorization?>>> hewitt-liang-2019-designing propose that probes should be optimised to maximise accuracy and selectivity. Recall selectivity is given by the distance between the accuracy on the original task and the accuracy on the control task using the same architecture. Given their characterization of control tasks, maximising selectivity leads to the selection of a model that is bad at memorization. But why should we punish memorization? Much of linguistic competence is about generalization; however, memorization also plays a key role BIBREF17, BIBREF18, BIBREF19, with word learning BIBREF20 being an obvious example. Indeed, maximizing selectivity as a criterion for creating probes seems to artificially disfavor this property. <<</What's Wrong with Memorization?>>> <<<What Low-Selectivity Means>>> hewitt-liang-2019-designing acknowledge that for the more complex task of dependency edge prediction, an MLP probe is more accurate and, therefore, preferable despite its low selectivity. However, they offer two counter-examples where the less selective neural probe exhibits drawbacks when compared to its more selective, linear counterpart. We believe both examples are a symptom of using a simple probe rather than of selectivity being a useful metric for probe selection. First, [§3.6]hewitt-liang-2019-designing point out that, in their experiments, the MLP-1 model frequently mislabels words with the suffix -s as NNPS on the POS labeling task. They present this finding as a possible example of a less selective probe being less faithful in representing what linguistic information the model has learned. Our analysis leads us to believe that, on the contrary, this shows that one should be using the best possible probe to minimize the chance of misrepresentation. Since more complex probes achieve higher accuracy on the task, as evidenced by the findings of hewitt-liang-2019-designing, we believe that the overall trend of misrepresentation is higher for the probes with higher selectivity. The same applies to the second example discussed in section [§4.2]hewitt-liang-2019-designing, where a less selective probe appears to be less faithful. The authors show that the representations on ELMo's second layer fail to outperform its word type ones (layer zero) on the POS labeling task when using the MLP-1 probe.
While they argue this is evidence for selectivity being a useful metric in choosing appropriate probes, we argue that this demonstrates yet again that one needs to use a more complex probe to minimize the chances of misrepresenting what the model has learned. The fact that the linear probe shows a difference only demonstrates that the information is perhaps more accessible with ELMo, not that it is not present; see sec:ease-extract. <<</What Low-Selectivity Means>>> <<</A Critique of Control Tasks>>> <<<Experiments>>> We consider the task of POS labeling and use the universal POS tag information BIBREF21 from the Universal Dependencies 2.4 BIBREF22. We probe the multilingual release of BERT on six typologically diverse languages: Basque, Czech, English, Finnish, Tamil, and Turkish; and we compute the contextual representations of each sentence by feeding it into BERT and averaging the output word piece representations for each word, as tokenized in the treebank. <<<Probe Architecture>>> As expounded upon above, our purpose is to achieve the best bound on mutual information we can. To this end, we employ a deep MLP as our probe. We define the probe as an $m$-layer neural network with the non-linearity $\sigma (\cdot ) = \mathrm {ReLU}(\cdot )$. The initial projection matrix is $W^{(1)} \in \mathbb {R}^{r_1 \times d}$ and the final projection matrix is $W^{(m)} \in \mathbb {R}^{|\mathcal {T}| \times r_{m-1}}$, where $r_i=\frac{r}{2^{i-1}}$. The remaining matrices are $W^{(i)} \in \mathbb {R}^{r_i \times r_{i-1}}$, so we halve the number of hidden states in each layer. We optimize over the hyperparameters—number of layers, hidden size, one-hot embedding size, and dropout—by using random search. For each estimate, we train 50 models and choose the one with the best validation cross-entropy. The cross-entropy on the test set is then used as our entropy estimate. <<</Probe Architecture>>> <<<Results>>> We know $\textsc {bert} $ can generate text in many languages; here we assess how much it actually knows about syntax in those languages, and how much more it knows than simple type-level baselines. tab:results-full presents these results, showing how much information $\textsc {bert} $, fastText and onehot embeddings encode about POS tagging. We see that—in all analysed languages—type-level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages. $\textsc {bert} $ presents negative gains in some of the analysed languages. Although this may seem to contradict the data-processing inequality, it is actually caused by the difficulty of approximating $\texttt {id} $ and $\mathbf {c}(\cdot )$ with a finite training set—causing $\mathrm {KL}_{q_{{\theta }1}}(T \mid R)$ to be larger than $\mathrm {KL}_{q_{{\theta }2}}(T \mid \mathbf {c}(R))$. We believe this highlights the need to formalize ease of extraction, as discussed in sec:ease-extract. Finally, when put into perspective, multilingual $\textsc {bert} $'s representations do not seem to encode much more information about syntax than a trivial baseline. $\textsc {bert} $ only improves upon fastText in three of the six analysed languages—and even in those, it encodes at most (in English) $5\%$ additional information.
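For reference, here is a minimal sketch of the probe family described in the Probe Architecture subsection above: an $m$-layer MLP with ReLU non-linearities whose hidden width is halved at every layer. The framework (PyTorch), the default sizes, and the dropout placement are illustrative choices, not the authors' released code.

import torch.nn as nn

def build_probe(rep_dim, n_tags, n_layers=3, width=512, dropout=0.2):
    # m-layer MLP probe: hidden size halved per layer, final projection to tags.
    layers, in_dim = [], rep_dim
    for i in range(n_layers - 1):
        out_dim = max(width // (2 ** i), 2)
        layers += [nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Dropout(dropout)]
        in_dim = out_dim
    layers.append(nn.Linear(in_dim, n_tags))
    return nn.Sequential(*layers)

# Training such a probe with cross-entropy on (representation, tag) pairs yields the
# H_q(T|R) estimate plugged into the mutual-information bound; depth, width, and
# dropout would be tuned by random search on validation cross-entropy.
probe = build_probe(rep_dim=768, n_tags=17, n_layers=4, width=512)
print(probe)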
<<</Results>>> <<</Experiments>>> <<<Conclusion>>> We proposed an information-theoretic formulation of probing: we defined probing as the task of estimating conditional mutual information. We introduced control functions, which allow us to put the amount of information encoded in contextual representations in the context of knowledge judged to be trivial. We further explored this formalization and showed that, given perfect probes, probing can only yield insights into the language itself and tells us nothing about the representations under investigation. Keeping this in mind, we suggested a change of focus—instead of focusing on probe size or information, we should look at ease of extraction going forward. On another note, we applied our formalization to evaluate multilingual $\textsc {bert} $'s syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than $81\%$ in all languages), it only encodes at most $5\%$ more information than some trivial baseline knowledge (a type-level representation). This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for assessing the syntactic understanding of contextual word embeddings. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nWord-Level Syntactic Probes for Contextual Embeddings\nNotation\nProbing as Mutual Information\nBounding Mutual Information\nBigger Probes are Better.\nControl Functions\nType-Level Control Functions\nHow Much Information Did We Gain?\nApproximating the Gain\nUnderstanding Probing Information-Theoretically\nYou Know Nothing, BERT\nWhat Do Control Functions Mean?\nDiscussion: Ease of Extraction\nA Critique of Control Tasks\nStructure and Randomness\nWhat's Wrong with Memorization?\nWhat Low-Selectivity Means\nExperiments\nProbe Architecture\nResults\nConclusion" ], "type": "outline" }
1908.08566
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Unsupervised Text Summarization via Mixed Model Back-Translation <<<Abstract>>> Back-translation based approaches have recently led to significant progress in unsupervised sequence-to-sequence tasks such as machine translation or style transfer. In this work, we extend the paradigm to the problem of learning a sentence summarization system from unaligned data. We present several initial models which rely on the asymmetrical nature of the task to perform the first back-translation step, and demonstrate the value of combining the data created by these diverse initialization methods. Our system outperforms the current state-of-the-art for unsupervised sentence summarization from fully unaligned data by over 2 ROUGE, and matches the performance of recent semi-supervised approaches. <<</Abstract>>> <<<Introduction>>> Machine summarization systems have made significant progress in recent years, especially in the domain of news text. This has been made possible among other things by the popularization of the neural sequence-to-sequence (seq2seq) paradigm BIBREF0, BIBREF1, BIBREF2, the development of methods which combine the strengths of extractive and abstractive approaches to summarization BIBREF3, BIBREF4, and the availability of large training datasets for the task, such as Gigaword or the CNN-Daily Mail corpus, which comprise over 3.8M shorter and 300K longer articles and aligned summaries, respectively. Unfortunately, the lack of datasets of similar scale for other text genres remains a limiting factor when attempting to take full advantage of these modeling advances using supervised training algorithms. In this work, we investigate the application of back-translation to training a summarization system in an unsupervised fashion from unaligned corpora of full text and summaries. Back-translation has been successfully applied to unsupervised training for other sequence-to-sequence tasks such as machine translation BIBREF5 or style transfer BIBREF6. We outline the main differences between these settings and text summarization, devise initialization strategies which take advantage of the asymmetrical nature of the task, and demonstrate the advantage of combining varied initializers. Our approach outperforms the previous state-of-the-art on unsupervised text summarization while using less training data, and even matches the rouge scores of recent semi-supervised methods. <<</Introduction>>> <<<Related Work>>> BIBREF7's work on applying neural seq2seq systems to the task of text summarization has been followed by a number of works improving upon the initial model architecture. These have included changing the base encoder structure BIBREF8, adding a pointer mechanism to directly re-use input words in the summary BIBREF9, BIBREF3, or explicitly pre-selecting parts of the full text to focus on BIBREF4. While there have been comparatively few attempts to train these models with less supervision, auto-encoding based approaches have met some success BIBREF10, BIBREF11. BIBREF10's work endeavors to use summaries as a discrete latent variable for a text auto-encoder.
They train a system on a combination of the classical log-likelihood loss of the supervised setting and a reconstruction objective which requires the full text to be mostly recoverable from the produced summary. While their method is able to take advantage of unlabelled data, it relies on a good initialization of the encoder part of the system which still needs to be learned on a significant number of aligned pairs. BIBREF11 expand upon this approach by replacing the need for supervised data with adversarial objectives which encourage the summaries to be structured like natural language, allowing them to train a system in a fully unsupervised setting from unaligned corpora of full text and summary sequences. Finally, BIBREF12 uses a general purpose pre-trained text encoder to learn a summarization system from fewer examples. Their proposed MASS scheme is shown to be more efficient than BERT BIBREF13 or Denoising Auto-Encoders (DAE) BIBREF14, BIBREF15. This work proposes a different approach to unsupervised training based on back-translation. The idea of using an initial weak system to create and iteratively refine artificial training data for a supervised algorithm has been successfully applied to semi-supervised BIBREF16 and unsupervised machine translation BIBREF5 as well as style transfer BIBREF6. We investigate how the same general paradigm may be applied to the task of summarizing text. <<</Related Work>>> <<<Mixed Model Back-Translation>>> Let us consider the task of transforming a sequence in domain $A$ into a corresponding sequence in domain $B$ (e.g. sentences in two languages for machine translation). Let $\mathcal {D}_A$ and $\mathcal {D}_B$ be corpora of sequences in $A$ and $B$, without any mapping between their respective elements. The back-translation approach starts with initial seq2seq models $f^0_{A \rightarrow B}$ and $f^0_{B \rightarrow A}$, which can be hand-crafted or learned without aligned pairs, and uses them to create artificial aligned training data: Let $\mathcal {S}$ denote a supervised learning algorithm, which takes a set of aligned sequence pairs and returns a mapping function. This artificial data can then be used to train the next iteration of seq2seq models, which in turn are used to create new artificial training sets ($A$ and $B$ can be switched here): The model is trained at each iteration on artificial inputs and real outputs, then used to create new training inputs. Thus, if the initial system isn't too far off, we can hope that training pairs get closer to the true data distribution with each step, allowing in turn to train better models. In the case of summarization, we consider the domains of full text sequences $\mathcal {D}^F$ and of summaries $\mathcal {D}^S$, and attempt to learn summarization ($f_{F\rightarrow S}$) and expansion ($f_{S\rightarrow F}$) functions. However, contrary to the translation case, $\mathcal {D}^F$ and $\mathcal {D}^S$ are not interchangeable. Considering that a summary typically has less information than the corresponding full text, we choose to only define initial ${F\rightarrow S}$ models. We can still follow the proposed procedure by alternating directions at each step. <<<Initialization Models for Summarization>>> To initiate their process for the case of machine translation, BIBREF5 use two different initialization models for their neural (NMT) and phrase-based (PBSMT) systems. 
The former relies on denoising auto-encoders in both languages with a shared latent space, while the latter uses the PBSMT system of BIBREF17 with a phrase table obtained through unsupervised vocabulary alignment as in BIBREF18. While both of these methods work well for machine translation, they rely on the input and output having similar lengths and information content. In particular, the statistical machine translation algorithm tries to align most input tokens to an output word. In the case of text summarization, however, there is an inherent asymmetry between the full text and the summaries, since the latter express only a subset of the former. Next, we propose three initialization systems which implicitly model this information loss. Full implementation details are provided in the Appendix. <<<Procrustes Thresholded Alignment (Pr-Thr)>>> The first initialization is similar to the one for PBSMT in that it relies on unsupervised vocabulary alignment. Specifically, we train two skipgram word embedding models using fasttext BIBREF19 on $\mathcal {D}^F$ and $\mathcal {D}^S$, then align them in a common space using the Wasserstein Procrustes method of BIBREF18. Then, we map each word of a full text sequence to its nearest neighbor in the aligned space if their distance is smaller than some threshold, or skip it otherwise. We also limit the output length, keeping only the first $N$ tokens. We refer to this function as $f_{F\rightarrow S}^{(\text{Pr-Thr}), 0}$. <<</Procrustes Thresholded Alignment (Pr-Thr)>>> <<<Denoising Bag-of-Word Auto-Encoder (DBAE)>>> Similarly to both BIBREF5 and BIBREF11, we also devise a starting model based on a DAE. One major difference is that we use a simple Bag-of-Words (BoW) encoder with fixed pre-trained word embeddings, and a 2-layer GRU decoder. Indeed, we find that a BoW auto-encoder trained on the summaries reaches a reconstruction rouge-l f-score of nearly 70% on the test set, indicating that word presence information is mostly sufficient to model the summaries. As for the noise model, for each token in the input, we remove it with probability $p/2$ and add a word drawn uniformly from the summary vocabulary with probability $p$. The BoW encoder has two advantages. First, it lacks the other models' bias to keep the word order of the full text in the summary. Secondly, when using the DBAE to predict summaries from the full text, we can weight the input word embeddings by their corpus-level probability of appearing in a summary, forcing the model to pay less attention to words that only appear in $\mathcal {D}^F$. The Denoising Bag-of-Words Auto-Encoder with input re-weighting is referred to as $f_{F\rightarrow S}^{(\text{DBAE}), 0}$. <<</Denoising Bag-of-Word Auto-Encoder (DBAE)>>> <<<First-Order Word Moments Matching (@!START@$\mathbf {\mu }$@!END@:1)>>> We also propose an extractive initialization model. Given the same BoW representation as for the DBAE, function $f_\theta ^\mu (s, v)$ predicts the probability that each word $v$ in a full text sequence $s$ is present in the summary. We learn the parameters of $f_\theta ^\mu $ by marginalizing the output probability of each word over all full text sequences, and matching these first-order moments to the marginal probability of each word's presence in a summary. 
That is, let $\mathcal {V}^S$ denote the vocabulary of $\mathcal {D}^S$, then $\forall v \in \mathcal {V}^S$: We minimize the binary cross-entropy (BCE) between the output and summary moments: We then define an initial extractive summarization model by applying $f_{\theta ^*}^\mu (\cdot , \cdot )$ to all words of an input sentence, and keeping the ones whose output probability is greater than some threshold. We refer to this model as $f_{F\rightarrow S}^{(\mathbf {\mu }:1), 0}$. <<</First-Order Word Moments Matching (@!START@$\mathbf {\mu }$@!END@:1)>>> <<</Initialization Models for Summarization>>> <<<Artificial Training Data>>> We apply the back-translation procedure outlined above in parallel for all three initialization models. For example, $f_{F\rightarrow S}^{(\mathbf {\mu }:1), 0}$ yields the following sequence of models and artificial aligned datasets: Finally, in order to take advantage of the various strengths of each of the initialization models, we also concatenate the artificial training dataset at each odd iteration to train a summarizer, e.g.: <<</Artificial Training Data>>> <<</Mixed Model Back-Translation>>> <<<Experiments>>> <<<Data and Model Choices>>> We validate our approach on the Gigaword corpus, which comprises of a training set of 3.8M article headlines (considered to be the full text) and titles (summaries), along with 200K validation pairs, and we report test performance on the same 2K set used in BIBREF7. Since we want to learn systems from fully unaligned data without giving the model an opportunity to learn an implicit mapping, we also further split the training set into 2M examples for which we only use titles, and 1.8M for headlines. All models after the initialization step are implemented as convolutional seq2seq architectures using Fairseq BIBREF20. Artificial data generation uses top-15 sampling, with a minimum length of 16 for full text and a maximum length of 12 for summaries. rouge scores are obtained with an output vocabulary of size 15K and a beam search of size 5 to match BIBREF11. <<</Data and Model Choices>>> <<<Initializers>>> Table TABREF9 compares test ROUGE for different initialization models, as well as the trivial Lead-8 baseline which simply copies the first 8 words of the article. We find that simply thresholding on distance during the word alignment step of (Pr-Thr) does slightly better then the full PBSMT system used by BIBREF5. Our BoW denoising auto-encoder with word re-weighting also performs significantly better than the full seq2seq DAE initialization used by BIBREF11 (Pre-DAE). The moments-based initial model ($\mathbf {\mu }$:1) scores higher than either of these, with scores already close to the full unsupervised system of BIBREF11. In order to investigate the effect of these three different strategies beyond their rouge statistics, we show generations of the three corresponding first iteration expanders for a given summary in Table TABREF1. The unsupervised vocabulary alignment in (Pr-Thr) handles vocabulary shift, especially changes in verb tenses (summaries tend to be in the present tense), but maintains the word order and adds very little information. Conversely, the ($\mathbf {\mu }$:1) expansion function, which is learned from purely extractive summaries, re-uses most words in the summary without any change and adds some new information. Finally, the auto-encoder based (DBAE) significantly increases the sequence length and variety, but also strays from the original meaning (more examples in the Appendix). 
The decoders also seem to learn facts about the world during their training on article text (EDF/GDF is France's public power company). <<</Initializers>>> <<<Full Models>>> Finally, Table TABREF13 compares the summarizers learned at various back-translation iterations to other unsupervised and semi-supervised approaches. Overall, our system outperforms the unsupervised Adversarial-reinforce of BIBREF11 after one back-translation loop, and most semi-supervised systems after the second one, including BIBREF12's MASS pre-trained sentence encoder and BIBREF10's Forced-attention Sentence Compression (FSC), which use 100K and 500K aligned pairs respectively. As far as back-translation approaches are concerned, we note that the model performances are correlated with the initializers' scores reported in Table TABREF9 (iterations 4 and 6 follow the same pattern). In addition, we find that combining data from all three initializers before training a summarizer system at each iteration as described in Section SECREF8 performs best, suggesting that the greater variety of artificial full text does help the model learn. <<</Full Models>>> <<<Conclusion>>> In this work, we use the back-translation paradigm for unsupervised training of a summarization system. We find that the model benefits from combining initializers, matching the performance of semi-supervised approaches. <<</Conclusion>>> <<</Experiments>>> <<</Title>>>
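As an illustration of the (Pr-Thr) initializer described in the paper above, the sketch below maps each full-text word to its nearest summary-vocabulary neighbour in an aligned embedding space and keeps only close matches, truncating the output length. It assumes the two embedding sets are already aligned and L2-normalized (the paper obtains them with fasttext plus Wasserstein Procrustes); the function name, threshold value, interpretation of the length limit, and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pr_thr_summarize(tokens, full_emb, summ_vocab, summ_emb, threshold=0.5, max_len=12):
    """Thresholded nearest-neighbour word mapping, one reading of the (Pr-Thr) initializer."""
    out = []
    for w in tokens:
        v = full_emb.get(w)
        if v is None:
            continue                      # skip words without an embedding
        sims = summ_emb @ v               # cosine similarities (rows assumed unit-norm)
        j = int(np.argmax(sims))
        if 1.0 - sims[j] < threshold:     # keep the neighbour only if it is close enough
            out.append(summ_vocab[j])
        if len(out) == max_len:           # one reading of "keeping only the first N tokens"
            break
    return out

# Toy sanity check with perfectly aligned unit vectors (not real fasttext embeddings).
rng = np.random.default_rng(0)
summ_vocab = ["u.s.", "stocks", "fall", "sharply"]
summ_emb = rng.normal(size=(4, 8))
summ_emb /= np.linalg.norm(summ_emb, axis=1, keepdims=True)
full_emb = {w: summ_emb[i] for i, w in enumerate(["american", "shares", "dropped", "steeply"])}
print(pr_thr_summarize("american shares dropped steeply today".split(), full_emb, summ_vocab, summ_emb))
```

The DBAE and moments-matching initializers would slot into the same place in the back-translation loop, since only the initial F-to-S function differs between the three runs.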
{ "references": [ "Title\nAbstract\nIntroduction\nRelated Work\nMixed Model Back-Translation\nInitialization Models for Summarization\nProcrustes Thresholded Alignment (Pr-Thr)\nDenoising Bag-of-Word Auto-Encoder (DBAE)\nFirst-Order Word Moments Matching (@!START@$\\mathbf {\\mu }$@!END@:1)\nArtificial Training Data\nExperiments\nData and Model Choices\nInitializers\nFull Models\nConclusion" ], "type": "outline" }
1912.00955
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Dynamic Prosody Generation for Speech Synthesis using Linguistics-Driven Acoustic Embedding Selection <<<Abstract>>> Recent advances in Text-to-Speech (TTS) have improved quality and naturalness to near-human capabilities when considering isolated sentences. But something which is still lacking in order to achieve human-like communication is the dynamic variations and adaptability of human speech. This work attempts to solve the problem of achieving a more dynamic and natural intonation in TTS systems, particularly for stylistic speech such as the newscaster speaking style. We propose a novel embedding selection approach which exploits linguistic information, leveraging the speech variability present in the training dataset. We analyze the contribution of both semantic and syntactic features. Our results show that the approach improves the prosody and naturalness for complex utterances as well as in Long Form Reading (LFR). <<</Abstract>>> <<<Introduction>>> Corresponding author email: [email protected]. Paper submitted to IEEE ICASSP 2020 Recent advances in TTS have improved the achievable synthetic speech naturalness to near human-like capabilities BIBREF0, BIBREF1, BIBREF2, BIBREF3. This means that for simple sentences, or for situations in which we can correctly predict the most appropriate prosodic representation, TTS systems are providing us with speech practically indistinguishable from that of humans. One aspect that most systems are still lacking is the natural variability of human speech, which is being observed as one of the reasons why the cognitive load of synthetic speech is higher than that of humans BIBREF4. This is something that variational models such as those based on Variational Auto-Encoding (VAE) BIBREF3, BIBREF5 attempt to solve by exploiting the sampling capabilities of the acoustic embedding space at inference time. Despite the advantages that VAE-based inference brings, it also suffers from the limitation that to synthesize a sample, one has to select an appropriate acoustic embedding for it, which can be challenging. A possible solution to this is to remove the selection process and consistently use a centroid to represent speech. This provides reliable acoustic representations but it suffers again from the monotonicity problem of conventional TTS. Another approach is to simply do a random sampling of the acoustic space. This would certainly solve the monotonicity problem if the acoustic embedding were varied enough. It can however, introduce erratic prosodic representations of longer texts, which can prove to be worse than being monotonous. Finally, one can consider text-based selection or prediction, as done in this research. In this work, we present a novel approach for informed embedding selection using linguistic features. The tight relationship between syntactic constituent structure and prosody is well known BIBREF6, BIBREF7. In the traditional Natural Language Processing (NLP) pipeline, constituency parsing produces full syntactic trees. 
More recent approaches based on Contextual Word Embedding (CWE) suggest that CWE are largely able to implicitly represent the classic NLP pipeline BIBREF8, while still retaining the ability to model lexical semantics BIBREF9. Thus, in this work we explore how TTS systems can enhance the quality of speech synthesis by using such linguistic features to guide the prosodic contour of generated speech. Similar relevant recent work exploring the advantages of exploiting syntactic information for TTS can be seen in BIBREF10, BIBREF11. While those studies, without any explicit acoustic pairing to the linguistic information, inject a number of curated features concatenated to the phonetic sequence as a way of informing the TTS system, the present study makes use of the linguistic information to drive the acoustic embedding selection rather than using it as an additional model features. An exploration of how to use linguistics as a way of predicting adequate acoustic embeddings can be seen in BIBREF12, where the authors explore the path of predicting an adequate embedding by informing the system with a set of linguistic and semantic information. The main difference of the present work is that in our case, rather than predicting a point in a high-dimensional space by making use of sparse input information (which is a challenging task and potentially vulnerable to training-domain dependencies), we use the linguistic information to predict the most similar embedding in our training set, reducing the complexity of the task significantly. The main contributions of this work are: i) we propose a novel approach of embedding selection in the acoustic space by using linguistic features; ii) we demonstrate that including syntactic information-driven acoustic embedding selection improves the overall speech quality, including its prosody; iii) we compare the improvements achieved by exploiting syntactic information in contrast with those brought by CWE; iv) we demonstrate that the approach improves the TTS quality in LFR experience as well. <<</Introduction>>> <<<Proposed Systems>>> CWE seem the obvious choice to drive embedding selection as they contain both syntactic and semantic information. However, a possible drawback of relying on CWE is that the linguistic-acoustic mapping space is sparse. The generalization capability of such systems in unseen scenarios will be poor BIBREF13. Also, as CWE models lexical semantics, it implies that two semantically similar sentences are likely to have similar CWE representations. This however does not necessarily correspond to a similarity in prosody, as the structure of the two sentences can be very different. We hypothesize that, in some scenarios, syntax will have better capability to generalize than semantics and that CWE have not been optimally exploited for driving prosody in speech synthesis. We explore these two hypotheses in our experiments. The objective of this work is to exploit sentence-level prosody variations available in the training dataset while synthesizing speech for the test sentence. The steps executed in this proposed approach are: (i) Generate suitable vector representations containing linguistic information for all the sentences in the train and test sets, (ii) Measure the similarity of the test sentence with each of the sentences in the train set. 
We do so by using cosine similarity between the vector representations as done in BIBREF14 to evaluate linguistic similarity, (iii) Choose the acoustic embedding of the train sentence which gives the highest similarity with the test sentence, (iv) Synthesize speech from VAE-based inference using this acoustic embedding <<<Systems>>> We experiment with three different systems for generating vector representations of the sentences, which allow us to explore the impact of both syntax and semantics on the overall quality of speech synthesis. The representations from the first system use syntactic information only, the second relies solely on CWE while the third uses a combination of CWE and explicit syntactic information. <<<Syntactic>>> Syntactic representations for sentences like constituency parse trees need to be transformed into vectors in order to be usable in neural TTS models. Some dimensions describing the tree can be transformed into word-based categorical feature like identity of parent and position of word in a phrase BIBREF15. The syntactic distance between adjacent words is known to be a prosodically relevant numerical source of information which is easily extracted from the constituency tree BIBREF16. It is explained by the fact that if many nodes must be traversed to find the first common ancestor, the syntactic distance between words is high. Large syntactic distances correlate with acoustically relevant events such as phrasing breaks or prosodic resets. To compute syntactic distance vector representations for sentences, we use the algorithm mentioned in BIBREF17. That is, for a sentence of n tokens, there are n corresponding distances which are concatenated together to give a vector of length n. The distance between the start of sentence and first token is always 0. We can see an example in Fig. 1: for the sentence “The brown fox is quick and it is jumping over the lazy dog", whose distance vector is d = [0 2 1 3 1 8 7 6 5 4 3 2 1]. The completion of the subject noun phrase (after `fox') triggers a prosodic reset, reflected in the distance of 3 between `fox' and `is'. There should also be a more emphasized reset at the end of the first clause, represented by the distance of 8 between `quick' and `and'. <<</Syntactic>>> <<<BERT>>> To generate CWE we use BERT BIBREF18, as it is one of the best performing pre-trained models with state of the art results on a large number of NLP tasks. BERT has also shown to generate strong representations for both syntax and semantics. We use the word representations from the uncased base (12 layer) model without fine-tuning. The sentence level representations are achieved by averaging the second to last hidden layer for each token in the sentence. These embeddings are used to drive acoustic embedding selection. <<</BERT>>> <<<BERT Syntactic>>> Even though BERT embeddings capture some aspects of syntactic information along with semantics, we decided to experiment with a system combining the information captured by both of the above mentioned systems. The information from syntactic distances and BERT embeddings cannot be combined at token level to give a single vector representation since both these systems use different tokenization algorithms. Tokenization in BERT is based on the wordpiece algorithm BIBREF19 as a way to eliminate the out-of-vocabulary issues. On the other hand, tokenization used to generate parse trees is based on morphological considerations rooted in linguistic theory. 
At inference time, we average the similarity scores obtained by comparing the BERT embeddings and the syntactic distance vectors. <<</BERT Syntactic>>> <<</Systems>>> <<<Applications to LFR>>> The approaches described in Section SECREF1 produce utterances with more varied prosody as compared to the long-term monotonicity of those obtained via centroid-based VAE inference. However, when considering multi-sentence texts, we have to be mindful of the issues that can be introduced by erratic transitions. We tackle this issue by minimizing the acoustic variation a sentence can have with respect to the previous one, while still minimizing the linguistic distance. We consider the Euclidean distance between the 2D Principal Component Analysis (PCA) projected acoustic embeddings as a measure of acoustic variation, as we observe that the projected space provides us with an acoustically relevant space in which distances can be easily obtained. Doing the same in the 64-dimensional VAE space did not perform as intended, likely because of the non-linear manifold representing our system, in which distances are not linear. As a result, certain sentence may be linguistically the closest match in terms of syntactic distance or CWE, but it will still not be selected if its acoustic embedding is far apart from that of the previous sentence. We modify the similarity evaluation metric used for choosing the closest match from the train set by adding a weighted cost to account for acoustic variation. This approach focuses only on the sentence transitions within a paragraph rather than optimizing the entire acoustic embedding path. This is done as follows: (i) Define the weights for linguistic similarity and acoustic similarity. In this work, the two weights sum up to 1; (ii) The objective is to minimize the following loss considering the acoustic embedding chosen for the previous sentence in the paragraph: Loss = LSW * (1-LS) + (1-LSW) * D, where LSW = Linguistic Similarity Weight; LS = Linguistic Similarity between test and train sentence; D = Euclidean distance between the acoustic embedding of the train sentence and the acoustic embedding chosen for the previous sentence. We fix D=0 for the first sentence of every paragraph. Thus, this approach is more suitable for cases when the first sentence is generally the carrier sentence, i.e. one which uses a structural template. This is particularly the case for news stories such as the ones considered in this research. Distances observed between the chosen acoustic embeddings for a sample paragraph and the effect of varying weights are depicted in the matrices in Fig FIGREF7. They are symmetric matrices, where each row and column of the matrix represents the sentence at index i in a paragraph. Each cell represents the Euclidean distance between the acoustic embeddings chosen for sentences at index i,j. We can see that in (a) the sentence at index 4 stands out as the most acoustically dissimilar sentence from the rest of the sentences in the paragraph. We see that the overall acoustic distance between sentences in much higher in (a) than in (b). As we are particularly concerned with transitions from previous to current sentence, we focus on cells (i,i-1) for each row. In (a), sentences at index 4 and 5 particularly stand out as potential erratic transitions due to high values in cell (4,3) and (5,4). In (b) we observe that the distances have significantly reduced and thus sentence transitions are expected to be smooth. 
As LSW decreases, the transitions become smoother. This is not `free': there is a trade-off, as increasing the transition smoothness decreases the linguistic similarity which also reduces the prosodic divergence. Fig. FIGREF10 shows the trade-off between the two, across the test set, when using syntactic distance to evaluate LS. Low linguistic distance (i.e. 1 - LS) and low acoustic distance are required. The plot shows that there is a sharp decrease in acoustic distance between LSW of 1.0 and 0.9 but the reduction becomes slower from therein, while the changes in linguistic distance progress in a linear fashion. We informally evaluated the performance of the systems by reducing LSW from 1.0 till 0.7 with a step size of 0.05 in order to look for an optimal balance. At LSW=0.9, the first elbow on acoustic distance curve, there was a significant decrease in the perceived erraticness. As such, we chose those values for our LFR evaluations. <<</Applications to LFR>>> <<</Proposed Systems>>> <<<Experimental Protocol>>> The research questions we attempt to answer are: Can linguistics-driven selection of acoustic waveform from the existing dataset lead to improved prosody and naturalness when synthesizing speech ? How does syntactic selection compare with CWE selection? Does this approach improve LFR experience as well? To answer these questions, we used in our experiments the systems, data and subjective evaluations described below. <<<Text-to-Speech System>>> The evaluated TTS system is a Tacotron-like system BIBREF20 already verified for the newscaster domain. A schematic description can be seen in Fig. FIGREF15 and a detailed explanation of the baseline system and the training data can be read in BIBREF21, BIBREF22. Conversion of the produced spectrograms to waveforms is done using the Universal WaveRNN-like model presented in BIBREF2. For this study, we consider an improved system that replaced the one-hot vector style modeling approach by a VAE-based reference encoder similar to BIBREF5, BIBREF3, in which the VAE embedding represents an acoustic encoding of a speech signal, allowing us to drive the prosodic representation of the synthesized text as observed in BIBREF23. The way of selecting the embedding at inference time is defined by the approaches introduced in Sections SECREF1 and SECREF6. The dimension of the embedding is set to 64 as it allows for the best convergence without collapsing the KLD loss during training. <<</Text-to-Speech System>>> <<<Datasets>>> <<<Training Dataset>>> (i) TTS System dataset: We trained our TTS system with a mixture of neutral and newscaster style speech. For a total of 24 hours of training data, split in 20 hours of neutral (22000 utterances) and 4 hours of newscaster styled speech (3000 utterances). (ii) Embedding selection dataset: As the evaluation was carried out only on the newscaster speaking style, we restrict our linguistic search space to the utterances associated to the newscaster style: 3000 sentences. <<</Training Dataset>>> <<<Evaluation Dataset>>> The systems were evaluated on two datasets: (i) Common Prosody Errors (CPE): The dataset on which the baseline Prostron model fails to generate appropriate prosody. This dataset consists of complex utterances like compound nouns (22%), “or" questions (9%), “wh" questions (18%). This set is further enhanced by sourcing complex utterances (51%) from BIBREF24. 
(ii) LFR: As demonstrated in BIBREF25, evaluating sentences in isolation does not suffice if we want to evaluate the quality of long-form speech. Thus, for evaluations on LFR we curated a dataset of news samples. The news style sentences were concatenated into full news stories, to capture the overall experience of our intended use case. <<</Evaluation Dataset>>> <<</Datasets>>> <<<Subjective evaluation>>> Our tests are based on MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) BIBREF26, but without forcing a system to be rated as 100, and not always considering a top anchor. All of our listeners, regardless of linguistic knowledge were native US English speakers. For the CPE dataset, we carried out two tests. The first one with 10 linguistic experts as listeners, who were asked to rate the appropriateness of the prosody ignoring the speaking style on a scale from 0 (very inappropriate) to 100 (very appropriate). The second test was carried out on 10 crowd-sourced listeners who evaluated the naturalness of the speech from 0 to 100. In both tests each listener was asked to rate 28 different screens, with 4 randomly ordered samples per screen for a total of 112 samples. The 4 systems were the 3 proposed ones and the centroid-based VAE inference as the baseline. For the LFR dataset, we conducted only a crowd-sourced evaluation of naturalness, where the listeners were asked to assess the suitability of newscaster style on a scale from 0 (completely unsuitable) to 100 (completely adequate). Each listener was presented with 51 news stories, each playing one of the 5 systems including the original recordings as a top anchor, the centroid-based VAE as baseline and the 3 proposed linguistics-driven embedding selection systems. <<</Subjective evaluation>>> <<</Experimental Protocol>>> <<<Results>>> Table 1 reports the average MUSHRA scores, evaluating prosody and naturalness, for each of the test systems on the CPE dataset. These results answer Q1, as the proposed approach improves significantly over the baseline on both grounds. It thus, gives us evidence supporting our hypothesis that linguistics-driven acoustic embedding selection can significantly improve speech quality. We also observe that better prosody does not directly translate into improved naturalness and that there is a need to improve acoustic modeling in order to better reflect the prosodic improvements achieved. We validate the differences between MUSHRA scores using pairwise t-test. All proposed systems improved significantly over the baseline prosody (p$<$0.01). For naturalness, BERT syntactic performed the best, improving over the baseline significantly (p=0.04). Other systems did not give statistically significant improvement over the baseline (p$>$0.05). The difference between BERT and BERT Syntactic is also statistically insignificant. Q2 is explored in Table TABREF21, which gives the breakdown of prosody results by major categories in CPE. For `wh' questions, we observe that Syntactic alone brings an improvement of 4% and BERT Syntactic performs the best by improving 8% over the baseline. This suggests that `wh' questions generally share a closely related syntax structure and that information can be used to achieve better prosody. This intuition is further strengthened by the improvements observed for `or' questions. Syntactic alone improves by 9% over the baseline and BERT Syntactic performs the best by improving 21% over the baseline. 
The improvement observed in `or' questions is greater than `wh' questions as most `or' questions have a syntax structure unique to them and this is consistent across samples in the category. For both these categories, the systems Syntactic, BERT and BERT Syntactic show incremental improvement as the first system contains only syntactic information, the next captures some aspect of syntax with semantics and the third has enhanced the representation of syntax with CWE representation to drive selection. Thus, it is evident that the extent of syntactic information captured drives the quality in speech synthesis for these two categories. Compound nouns proved harder to improve upon as compared to questions. BERT performed the best in this category with a 1.2% improvement over the baseline. We can attribute this to the capability of BERT to capture context which Syntactic does not do. This plays a critical role in compound nouns, where to achieve suitable prosody it is imperative to understand in which context the nouns are being used. For other complex sentences as well, BERT performed the best by improving over the baseline by 6%. This can again be attributed to the fact that most of the complex sentences required contextual knowledge. Although Syntactic does improve over the baseline, syntax does not look like the driving factor as BERT Syntactic performs a bit worse than BERT. This indicates that enhancing syntax representation hinders BERT from fully leveraging the contextual knowledge it captured to drive embedding selection. Q3 is answered in Table TABREF22, which reports the MUSHRA scores on the LFR dataset. The Syntactic system performed the best with high statistical significance (p=0.02) in comparison to baseline. We close the gap between the baseline and the recordings by almost 20%. Other systems show statistically insignificant (p$>$0.05) improvements over the baseline. To achieve suitable prosody, LFR requires longer distance dependencies and knowledge of prosodic groups. Such information can be approximated more effectively by the Syntactic system rather than the CWE based systems. However, this is a topic for a potential future exploration as the difference between BERT and Syntactic is statistically insignificant (p=0.6). <<</Results>>> <<<Conclusion>>> The current VAE-based TTS systems are susceptible to monotonous speech generation due to the need to select a suitable acoustic embedding to synthesize a sample. In this work, we proposed to generate dynamic prosody from the same TTS systems by using linguistics to drive acoustic embedding selection. Our proposed approach is able to improve the overall speech quality including prosody and naturalness. We propose 3 techniques (Syntactic, BERT and BERT Syntactic) and evaluated their performance on 2 datasets: common prosodic errors and LFR. The Syntactic system was able to improve significantly over the baseline on almost all parameters (except for naturalness on CPE). Information captured by BERT further improved prosody in cases where contextual knowledge was required. For LFR, we bridged the gap between baseline and actual recording by 20%. This approach can be further extended by making the model aware of these features rather than using them to drive embedding selection. <<</Conclusion>>> <<</Title>>>
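The selection rule at the core of the systems above, including the LFR variant, reduces to a nearest-neighbour search under a weighted loss. Below is a minimal sketch of that rule, assuming the linguistic vectors (syntactic-distance or BERT based), the 64-dimensional VAE acoustic embeddings, and their 2-D PCA projections are all precomputed; names, shapes, and the toy data are illustrative, not the paper's implementation.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_embedding(test_vec, train_vecs, train_acoustic, train_acoustic_2d,
                     prev_2d=None, lsw=0.9):
    """Pick the training sentence minimizing LSW * (1 - LS) + (1 - LSW) * D, where LS is
    linguistic (cosine) similarity and D is the Euclidean distance between the candidate's
    2-D PCA projection and the projection chosen for the previous sentence."""
    best_i, best_loss = None, float("inf")
    for i in range(len(train_vecs)):
        ls = cosine_sim(test_vec, train_vecs[i])
        d = 0.0 if prev_2d is None else float(np.linalg.norm(train_acoustic_2d[i] - prev_2d))
        loss = lsw * (1.0 - ls) + (1.0 - lsw) * d
        if loss < best_loss:
            best_i, best_loss = i, loss
    return train_acoustic[best_i], train_acoustic_2d[best_i]

# Toy usage with random stand-ins for the precomputed representations.
rng = np.random.default_rng(1)
train_vecs = rng.normal(size=(100, 16))
train_acoustic = rng.normal(size=(100, 64))
train_acoustic_2d = rng.normal(size=(100, 2))
emb, emb_2d = select_embedding(rng.normal(size=16), train_vecs, train_acoustic, train_acoustic_2d)
next_emb, _ = select_embedding(rng.normal(size=16), train_vecs, train_acoustic,
                               train_acoustic_2d, prev_2d=emb_2d, lsw=0.9)
```

Passing prev_2d=None (or lsw=1.0) recovers the plain linguistic-similarity selection used for isolated sentences; lsw=0.9 is the trade-off the paper settles on for LFR.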
{ "references": [ "Title\nAbstract\nIntroduction\nProposed Systems\nSystems\nSyntactic\nBERT\nBERT Syntactic\nApplications to LFR\nExperimental Protocol\nText-to-Speech System\nDatasets\nTraining Dataset\nEvaluation Dataset\nSubjective evaluation\nResults\nConclusion" ], "type": "outline" }
1909.08752
Please extract the outline of the given paper. You just need to output the section names (without details in sections' content) in the correct order without any additional explanation, like "Abstract Introduction Related Work Method <Subsection1 of Method> <Subsection2 of Method> Experiments Conclusion". Context: <<<Title>>> Summary Level Training of Sentence Rewriting for Abstractive Summarization <<<Abstract>>> As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of extracting salient sentences from a document first and then paraphrasing the selected ones to generate a summary. However, the existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between a training objective and evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its ability on natural language understanding. In extensive experiments, we show that a combination of our proposed model and training procedure obtains new state-of-the-art performance on both CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on DUC-2002 test set. <<</Abstract>>> <<<Introduction>>> The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping salient information of the original text. In general, there are two ways to do text summarization: Extractive and Abstractive BIBREF0. Extractive approaches generate summaries by selecting salient sentences or phrases from a source text, while abstractive approaches involve a process of paraphrasing or generating sentences to write a summary. Recent work BIBREF1, BIBREF2 demonstrates that it is highly beneficial for extractive summarization models to incorporate pre-trained language models (LMs) such as BERT BIBREF3 into their architectures. However, the performance improvement from the pre-trained LMs is known to be relatively small in case of abstractive summarization BIBREF4, BIBREF5. This discrepancy may be due to the difference between extractive and abstractive approaches in ways of dealing with the task—the former classifies whether each sentence to be included in a summary, while the latter generates a whole summary from scratch. In other words, as most of the pre-trained LMs are designed to be of help to the tasks which can be categorized as classification including extractive summarization, they are not guaranteed to be advantageous to abstractive summarization models that should be capable of generating language BIBREF6, BIBREF7. On the other hand, recent studies for abstractive summarization BIBREF8, BIBREF9, BIBREF10 have attempted to exploit extractive models. Among these, a notable one is BIBREF8, in which a sophisticated model called Reinforce-Selected Sentence Rewriting is proposed. The model consists of both an extractor and abstractor, where the extractor picks out salient sentences first from a source article, and then the abstractor rewrites and compresses the extracted sentences into a complete summary. It is further fine-tuned by training the extractor with the rewards derived from sentence-level ROUGE scores of the summary generated from the abstractor. In this paper, we improve the model of BIBREF8, addressing two primary issues. 
Firstly, we argue there is a bottleneck in the existing extractor on the basis of the observation that its performance as an independent summarization model (i.e., without the abstractor) is no better than solid baselines such as selecting the first 3 sentences. To resolve the problem, we present a novel neural extractor exploiting the pre-trained LMs (BERT in this work) which are expected to perform better according to the recent studies BIBREF1, BIBREF2. Since the extractor is a sort of sentence classifier, we expect that it can make good use of the ability of pre-trained LMs which is proven to be effective in classification. Secondly, the other point is that there is a mismatch between the training objective and evaluation metric; the previous work utilizes the sentence-level ROUGE scores as a reinforcement learning objective, while the final performance of a summarization model is evaluated by the summary-level ROUGE scores. Moreover, as BIBREF11 pointed out, sentences with the highest individual ROUGE scores do not necessarily lead to an optimal summary, since they may contain overlapping contents, causing verbose and redundant summaries. Therefore, we propose to directly use the summary-level ROUGE scores as an objective instead of the sentence-level scores. A potential problem arising from this approach is the sparsity of training signals, because the summary-level ROUGE scores are calculated only once for each training episode. To alleviate this problem, we use reward shaping BIBREF12 to give an intermediate signal for each action, preserving the optimal policy. We empirically demonstrate the superiority of our approach by achieving new state-of-the-art abstractive summarization results on CNN/Daily Mail and New York Times datasets BIBREF13, BIBREF14. It is worth noting that our approach shows large improvements especially on ROUGE-L score which is considered a means of assessing fluency BIBREF11. In addition, our model performs much better than previous work when testing on DUC-2002 dataset, showing better generalization and robustness of our model. Our contributions in this work are three-fold: a novel successful application of pre-trained transformers for abstractive summarization; suggesting a training method to globally optimize sentence selection; achieving the state-of-the-art results on the benchmark datasets, CNN/Daily Mail and New York Times. <<</Introduction>>> <<<Background>>> <<<Sentence Rewriting>>> In this paper, we focus on single-document multi-sentence summarization and propose a neural abstractive model based on the Sentence Rewriting framework BIBREF8, BIBREF15 which consists of two parts: a neural network for the extractor and another network for the abstractor. The extractor network is designed to extract salient sentences from a source article. The abstractor network rewrites the extracted sentences into a short summary. <<</Sentence Rewriting>>> <<<Learning Sentence Selection>>> The most common way to train an extractor to select informative sentences is building extractive oracles as gold targets, and training with cross-entropy (CE) loss. An oracle consists of a set of sentences with the highest possible ROUGE scores. Building oracles is finding an optimal combination of sentences, where there are $2^n$ possible combinations for each example. Because of this, the exact optimization for ROUGE scores is intractable.
Therefore, alternative methods identify the set of sentences with greedy search BIBREF16, sentence-level search BIBREF9, BIBREF17 or collective search using the limited number of sentences BIBREF15, which construct suboptimal oracles. Even if all the optimal oracles are found, training with CE loss using these labels will cause underfitting as it will only maximize probabilities for sentences in label sets and ignore all other sentences. Alternatively, reinforcement learning (RL) can give room for exploration in the search space. BIBREF8, our baseline work, proposed to apply policy gradient methods to train an extractor. This approach makes an end-to-end trainable stochastic computation graph, encouraging the model to select sentences with high ROUGE scores. However, they define a reward for an action (sentence selection) as a sentence-level ROUGE score between the chosen sentence and a sentence in the ground truth summary for that time step. This leads the extractor agent to a suboptimal policy; the set of sentences matching individually with each sentence in a ground truth summary isn't necessarily optimal in terms of summary-level ROUGE score. BIBREF11 proposed policy gradient with rewards from summary-level ROUGE. They defined an action as sampling a summary from candidate summaries that contain the limited number of plausible sentences. After training, a sentence is ranked high for selection if it often occurs in high scoring summaries. However, their approach still has a risk of ranking redundant sentences high; if two highly overlapped sentences have salient information, they would be ranked high together, increasing the probability of being sampled in one summary. To tackle this problem, we propose a training method using reinforcement learning which globally optimizes summary-level ROUGE score and gives intermediate rewards to ease the learning. <<</Learning Sentence Selection>>> <<<Pre-trained Transformers>>> Transferring representations from pre-trained transformer language models has been highly successful in the domain of natural language understanding tasks BIBREF18, BIBREF3, BIBREF19, BIBREF20. These methods first pre-train highly stacked transformer blocks BIBREF21 on a huge unlabeled corpus, and then fine-tune the models or representations on downstream tasks. <<</Pre-trained Transformers>>> <<</Background>>> <<<Model>>> Our model consists of two neural network modules, i.e. an extractor and abstractor. The extractor encodes a source document and chooses sentences from the document, and then the abstractor paraphrases the summary candidates. Formally, a single document consists of $n$ sentences $D=\lbrace s_1,s_2,\cdots ,s_n\rbrace $. We denote $i$-th sentence as $s_i=\lbrace w_{i1},w_{i2},\cdots ,w_{im}\rbrace $ where $w_{ij}$ is the $j$-th word in $s_i$. The extractor learns to pick out a subset of $D$ denoted as $\hat{D}=\lbrace \hat{s}_1,\hat{s}_2,\cdots ,\hat{s}_k|\hat{s}_i\in D\rbrace $ where $k$ sentences are selected. The abstractor rewrites each of the selected sentences to form a summary $S=\lbrace f(\hat{s}_1),f(\hat{s}_2),\cdots ,f(\hat{s}_k)\rbrace $, where $f$ is an abstracting function. And a gold summary consists of $l$ sentences $A=\lbrace a_1,a_2,\cdots ,a_l\rbrace $. <<<Extractor Network>>> The extractor is based on the encoder-decoder framework. We adapt BERT for the encoder to exploit contextualized representations from pre-trained transformers. 
BERT as the encoder maps the input sequence $D$ to sentence representation vectors $H=\lbrace h_1,h_2,\cdots ,h_n\rbrace $, where $h_i$ is for the $i$-th sentence in the document. Then, the decoder utilizes $H$ to extract $\hat{D}$ from $D$. <<<Leveraging Pre-trained Transformers>>> Although we require the encoder to output the representation for each sentence, the output vectors from BERT are grounded to tokens instead of sentences. Therefore, we modify the input sequence and embeddings of BERT as BIBREF1 did. In the original BERT's configure, a [CLS] token is used to get features from one sentence or a pair of sentences. Since we need a symbol for each sentence representation, we insert the [CLS] token before each sentence. And we add a [SEP] token at the end of each sentence, which is used to differentiate multiple sentences. As a result, the vector for the $i$-th [CLS] symbol from the top BERT layer corresponds to the $i$-th sentence representation $h_i$. In addition, we add interval segment embeddings as input for BERT to distinguish multiple sentences within a document. For $s_i$ we assign a segment embedding $E_A$ or $E_B$ conditioned on $i$ is odd or even. For example, for a consecutive sequence of sentences $s_1, s_2, s_3, s_4, s_5$, we assign $E_A, E_B, E_A, E_B, E_A$ in order. All the words in each sentence are assigned to the same segment embedding, i.e. segment embeddings for $w_{11}, w_{12},\cdots ,w_{1m}$ is $E_A,E_A,\cdots ,E_A$. An illustration for this procedure is shown in Figure FIGREF1. <<</Leveraging Pre-trained Transformers>>> <<<Sentence Selection>>> We use LSTM Pointer Network BIBREF22 as the decoder to select the extracted sentences based on the above sentence representations. The decoder extracts sentences recurrently, producing a distribution over all of the remaining sentence representations excluding those already selected. Since we use the sequential model which selects one sentence at a time step, our decoder can consider the previously selected sentences. This property is needed to avoid selecting sentences that have overlapping information with the sentences extracted already. As the decoder structure is almost the same with the previous work, we convey the equations of BIBREF8 to avoid confusion, with minor modifications to agree with our notations. Formally, the extraction probability is calculated as: where $e_t$ is the output of the glimpse operation: In Equation DISPLAY_FORM9, $z_t$ is the hidden state of the LSTM decoder at time $t$ (shown in green in Figure FIGREF1). All the $W$ and $v$ are trainable parameters. <<</Sentence Selection>>> <<</Extractor Network>>> <<<Abstractor Network>>> The abstractor network approximates $f$, which compresses and paraphrases an extracted document sentence to a concise summary sentence. We use the standard attention based sequence-to-sequence (seq2seq) model BIBREF23, BIBREF24 with the copying mechanism BIBREF25 for handling out-of-vocabulary (OOV) words. Our abstractor is practically identical to the one proposed in BIBREF8. <<</Abstractor Network>>> <<</Model>>> <<<Training>>> In our model, an extractor selects a series of sentences, and then an abstractor paraphrases them. As they work in different ways, we need different training strategies suitable for each of them. Training the abstractor is relatively obvious; maximizing log-likelihood for the next word given the previous ground truth words. However, there are several issues for extractor training. 
First, the extractor should consider the abstractor's rewriting process when it selects sentences. This causes a weak supervision problem BIBREF26, since the extractor gets training signals indirectly after paraphrasing processes are finished. In addition, thus this procedure contains sampling or maximum selection, the extractor performs a non-differentiable extraction. Lastly, although our goal is maximizing ROUGE scores, neural models cannot be trained directly by maximum likelihood estimation from them. To address those issues above, we apply standard policy gradient methods, and we propose a novel training procedure for extractor which guides to the optimal policy in terms of the summary-level ROUGE. As usual in RL for sequence prediction, we pre-train submodules and apply RL to fine-tune the extractor. <<<Training Submodules>>> <<<Extractor Pre-training>>> Starting from a poor random policy makes it difficult to train the extractor agent to converge towards the optimal policy. Thus, we pre-train the network using cross entropy (CE) loss like previous work BIBREF27, BIBREF8. However, there is no gold label for extractive summarization in most of the summarization datasets. Hence, we employ a greedy approach BIBREF16 to make the extractive oracles, where we add one sentence at a time incrementally to the summary, such that the ROUGE score of the current set of selected sentences is maximized for the entire ground truth summary. This doesn't guarantee optimal, but it is enough to teach the network to select plausible sentences. Formally, the network is trained to minimize the cross-entropy loss as follows: where $s^*_t$ is the $t$-th generated oracle sentence. <<</Extractor Pre-training>>> <<<Abstractor Training>>> For the abstractor training, we should create training pairs for input and target sentences. As the abstractor paraphrases on sentence-level, we take a sentence-level search for each ground-truth summary sentence. We find the most similar document sentence $s^{\prime }_t$ by: And then the abstractor is trained as a usual sequence-to-sequence model to minimize the cross-entropy loss: where $w^a_j$ is the $j$-th word of the target sentence $a_t$, and $\Phi $ is the encoded representation for $s^{\prime }_t$. <<</Abstractor Training>>> <<</Training Submodules>>> <<<Guiding to the Optimal Policy>>> To optimize ROUGE metric directly, we assume the extractor as an agent in reinforcement learning paradigm BIBREF28. We view the extractor has a stochastic policy that generates actions (sentence selection) and receives the score of final evaluation metric (summary-level ROUGE in our case) as the return While we are ultimately interested in the maximization of the score of a complete summary, simply awarding this score at the last step provides a very sparse training signal. For this reason we define intermediate rewards using reward shaping BIBREF12, which is inspired by BIBREF27's attempt for sequence prediction. Namely, we compute summary-level score values for all intermediate summaries: The reward for each step $r_t$ is the difference between the consecutive pairs of scores: This measures an amount of increase or decrease in the summary-level score from selecting $\hat{s}_t$. Using the shaped reward $r_t$ instead of awarding the whole score $R$ at the last step does not change the optimal policy BIBREF12. We define a discounted future reward for each step as $R_t=\sum _{t=1}^{k}\gamma ^tr_{t+1}$, where $\gamma $ is a discount factor. 
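As a concrete reading of the reward shaping just defined, the sketch below converts a sequence of intermediate summary-level scores into per-step shaped rewards and discounted returns. The scorer itself (summary-level ROUGE-L in the paper) is left abstract, the empty summary is assumed to score zero, and the helper names are ours rather than the paper's.

```python
def shaped_rewards(scores):
    """r_t = score(s_1..s_t) - score(s_1..s_{t-1}); scores[0] is the empty-summary score."""
    return [scores[t] - scores[t - 1] for t in range(1, len(scores))]

def discounted_returns(rewards, gamma=0.95):
    """R_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ..., computed backwards."""
    returns, acc = [], 0.0
    for r in reversed(rewards):
        acc = r + gamma * acc
        returns.append(acc)
    return list(reversed(returns))

# Example: summary-level scores after extracting 0, 1, 2 and 3 sentences.
scores = [0.0, 0.21, 0.34, 0.37]
rewards = shaped_rewards(scores)   # roughly [0.21, 0.13, 0.03]; they telescope to the final score
print(discounted_returns(rewards, gamma=0.95))
```

In the full training loop, these returns are the quantities from which the advantage estimates of the actor-critic update are formed.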
Additionally, we add `stop' action to the action space, by concatenating trainable parameters $h_{\text{stop}}$ (the same dimension as $h_i$) to $H$. The agent treats it as another candidate to extract. When it selects `stop', an extracting episode ends and the final return is given. This encourages the model to extract additional sentences only when they are expected to increase the final return. Following BIBREF8, we use the Advantage Actor Critic BIBREF29 method to train. We add a critic network to estimate a value function $V_t(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$, which then is used to compute advantage of each action (we will omit the current state $(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$ to simplify): where $Q_t(s_i)$ is the expected future reward for selecting $s_i$ at the current step $t$. We maximize this advantage with the policy gradient with the Monte-Carlo sample ($A_t(s_i) \approx R_t - V_t$): where $\theta _\pi $ is the trainable parameters of the actor network (original extractor). And the critic is trained to minimize the square loss: where $\theta _\psi $ is the trainable parameters of the critic network. <<</Guiding to the Optimal Policy>>> <<</Training>>> <<<Experimental Setup>>> <<<Datasets>>> We evaluate the proposed approach on the CNN/Daily Mail BIBREF13 and New York Times BIBREF30 dataset, which are both standard corpora for multi-sentence abstractive summarization. Additionally, we test generalization of our model on DUC-2002 test set. CNN/Daily Mail dataset consists of more than 300K news articles and each of them is paired with several highlights. We used the standard splits of BIBREF13 for training, validation and testing (90,226/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for Daily Mail). We did not anonymize entities. We followed the preprocessing methods in BIBREF25 after splitting sentences by Stanford CoreNLP BIBREF31. The New York Times dataset also consists of many news articles. We followed the dataset splits of BIBREF14; 100,834 for training and 9,706 for test examples. And we also followed the filtering procedure of them, removing documents with summaries that are shorter than 50 words. The final test set (NYT50) contains 3,452 examples out of the original 9,706. The DUC-2002 dataset contains 567 document-summary pairs for single-document summarization. As a single document can have multiple summaries, we made one pair per summary. We used this dataset as a test set for our model trained on CNN/Daily Mail dataset to test generalization. <<</Datasets>>> <<<Implementation Details>>> Our extractor is built on $\text{BERT}_\text{BASE}$ with fine-tuning, smaller version than $\text{BERT}_\text{LARGE}$ due to limitation of time and space. We set LSTM hidden size as 256 for all of our models. To initialize word embeddings for our abstractor, we use word2vec BIBREF32 of 128 dimensions trained on the same corpus. We optimize our model with Adam optimizer BIBREF33 with $\beta _1=0.9$ and $\beta _2=0.999$. For extractor pre-training, we use learning rate schedule following BIBREF21 with $warmup=10000$: And we set learning rate $1e^{-3}$ for abstractor and $4e^{-6}$ for RL training. We apply gradient clipping using L2 norm with threshold $2.0$. For RL training, we use $\gamma =0.95$ for the discount factor. To ease learning $h_{\text{stop}}$, we set the reward for the stop action to $\lambda \cdot \text{ROUGE-L}^{\text{summ}}_{F_1}(S, A)$, where $\lambda $ is a stop coefficient set to $0.08$. 
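One implementation detail mentioned above does not survive in this text: the displayed formula for the warmup learning-rate schedule used in extractor pre-training (following BIBREF21). A common form of that schedule is sketched below; the constant factor is an explicit assumption, not the paper's exact value.

```python
def warmup_lr(step, warmup=10000, scale=1e-3):
    """Linear warmup followed by inverse-square-root decay (Transformer-style schedule).
    `scale` is assumed here; the paper's constant is not reproduced in this text."""
    step = max(step, 1)
    return scale * min(step ** -0.5, step * warmup ** -1.5)

print(warmup_lr(1000), warmup_lr(10000), warmup_lr(100000))  # rises, peaks around `warmup`, then decays
```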
Our critic network shares the encoder with the actor (extractor) and has the same architecture with it except the output layer, estimating scalar for the state value. And the critic is initialized with the parameters of the pre-trained extractor where it has the same architecture. <<</Implementation Details>>> <<<Evaluation>>> We evaluate the performance of our method using different variants of ROUGE metric computed with respect to the gold summaries. On the CNN/Daily Mail and DUC-2002 dataset, we use standard ROUGE-1, ROUGE-2, and ROUGE-L BIBREF34 on full length $F_1$ with stemming as previous work did BIBREF16, BIBREF25, BIBREF8. On NYT50 dataset, following BIBREF14 and BIBREF35, we used the limited length ROUGE recall metric, truncating the generated summary to the length of the ground truth summary. <<</Evaluation>>> <<</Experimental Setup>>> <<<Results>>> <<<CNN/Daily Mail>>> Table TABREF24 shows the experimental results on CNN/Daily Mail dataset, with extractive models in the top block and abstractive models in the bottom block. For comparison, we list the performance of many recent approaches with ours. <<<Extractive Summarization>>> As BIBREF25 showed, the first 3 sentences (lead-3) in an article form a strong summarization baseline in CNN/Daily Mail dataset. Therefore, the very first objective of extractive models is to outperform the simple method which always returns 3 or 4 sentences at the top. However, as Table TABREF27 shows, ROUGE scores of lead baselines and extractors from previous work in Sentence Rewrite framework BIBREF8, BIBREF15 are almost tie. We can easily conjecture that the limited performances of their full model are due to their extractor networks. Our extractor network with BERT (BERT-ext), as a single model, outperforms those models with large margins. Adding reinforcement learning (BERT-ext + RL) gives higher performance, which is competitive with other extractive approaches using pre-trained Transformers (see Table TABREF24). This shows the effectiveness of our learning method. <<</Extractive Summarization>>> <<<Abstractive Summarization>>> Our abstractive approaches combine the extractor with the abstractor. The combined model (BERT-ext + abs) without additional RL training outperforms the Sentence Rewrite model BIBREF8 without reranking, showing the effectiveness of our extractor network. With the proposed RL training procedure (BERT-ext + abs + RL), our model exceeds the best model of BIBREF8. In addition, the result is better than those of all the other abstractive methods exploiting extractive approaches in them BIBREF9, BIBREF8, BIBREF10. <<</Abstractive Summarization>>> <<<Redundancy Control>>> Although the proposed RL training inherently gives training signals that induce the model to avoid redundancy across sentences, there can be still remaining overlaps between extracted sentences. We found that the additional methods reducing redundancies can improve the summarization quality, especially on CNN/Daily Mail dataset. We tried Trigram Blocking BIBREF1 for extractor and Reranking BIBREF8 for abstractor, and we empirically found that the reranking only improves the performance. This helps the model to compress the extracted sentences focusing on disjoint information, even if there are some partial overlaps between the sentences. Our best abstractive model (BERT-ext + abs + RL + rerank) achieves the new state-of-the-art performance for abstractive summarization in terms of average ROUGE score, with large margins on ROUGE-L. 
However, we empirically found that the reranking method has no effect or a negative effect on the NYT50 and DUC-2002 datasets. Hence, we do not apply it to the remaining datasets.

<<</Redundancy Control>>>
<<<Combinatorial Reward>>>
Before examining the effects of our summary-level rewards on the final results, we check the upper bounds of the different training signals for the full model. All document sentences are paraphrased with our trained abstractor, and we then find the best set for each search method. Sentence-matching finds, for each sentence in the gold summary, the document sentence with the highest ROUGE-L score. This search method corresponds to the best reward from BIBREF8. Greedy Search is the same method explained for extractor pre-training in section SECREF11. Combination Search selects, from all possible combinations of sentences, the set with the highest summary-level ROUGE-L score. Due to time constraints, we limited the maximum number of sentences to 5. This method corresponds to our final return in RL training.

Table TABREF31 shows the summary-level ROUGE scores of the methods described above. We see considerable gaps between Sentence-matching and Greedy Search, while the scores of Greedy Search are close to those of Combination Search. Note that since we limited the number of sentences for Combination Search, its exact scores would be higher. These scores can be interpreted as upper bounds for the corresponding training methods. This result supports our training strategy: pre-training with Greedy Search and final optimization with the combinatorial return.

Additionally, we run an experiment to verify the contribution of our training method. We train the same model with different training signals: the sentence-level reward from BIBREF8 and our combinatorial reward. The results are shown in Table TABREF34. Both with and without reranking, the models trained with the combinatorial reward consistently outperform those trained with the sentence-level reward.

<<</Combinatorial Reward>>>
<<<Human Evaluation>>>
We also conduct a human evaluation to ensure the robustness of our training procedure. We measure relevance and readability of the summaries. Relevance is based on the summary containing important, salient information from the input article, being correct by avoiding contradictory/unrelated information, and avoiding repeated/redundant information. Readability is based on the summary's fluency, grammaticality, and coherence. To evaluate both criteria, we design an Amazon Mechanical Turk experiment based on a ranking method, inspired by BIBREF36. We randomly select 20 samples from the CNN/Daily Mail test set and ask human testers (3 per sample) to rank the summaries (for relevance and readability) produced by 3 different models: our final model, that of BIBREF8, and that of BIBREF1. 2, 1 and 0 points were given according to the ranking. The models were anonymized and randomly shuffled. Following previous work, the input article and ground-truth summaries are also shown to the human participants in addition to the three model summaries. From the results shown in Table TABREF36, we can see that our model is better in relevance compared to the others. In terms of readability, there was no noticeable difference.

<<</Human Evaluation>>>
<<</CNN/Daily Mail>>>
<<<New York Times corpus>>>
Table TABREF38 gives the results on the NYT50 dataset. Our BERT-ext + abs + RL outperforms all the extractive and abstractive models, except on ROUGE-1 from BIBREF1.
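For reference on the search procedures compared in Table TABREF31, the following is a minimal sketch of Greedy Search and Combination Search, assuming a hypothetical summary-level scorer rouge_l_summ_f1; it illustrates the idea only and is not the exact procedure used in our experiments.

from itertools import combinations

def greedy_search(sentences, gold, rouge_l_summ_f1):
    # Greedily add the sentence that most improves summary-level ROUGE-L;
    # stop as soon as no remaining sentence increases the score.
    selected, best = [], 0.0
    remaining = list(sentences)
    while remaining:
        score, sent = max((rouge_l_summ_f1(selected + [s], gold), s) for s in remaining)
        if score <= best:
            break
        selected.append(sent)
        remaining.remove(sent)
        best = score
    return selected

def combination_search(sentences, gold, rouge_l_summ_f1, max_sents=5):
    # Exhaustively score every subset of up to max_sents sentences and keep the best.
    best_set, best = [], 0.0
    for k in range(1, max_sents + 1):
        for subset in combinations(sentences, k):
            score = rouge_l_summ_f1(list(subset), gold)
            if score > best:
                best_set, best = list(subset), score
    return best_set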
Compared with two recent models that adapt BERT for their summarization models BIBREF1, BIBREF4, ours is another method that successfully leverages BERT for summarization. In addition, the experiment demonstrates the effectiveness of our RL training, with an improvement of about 2 points on each ROUGE metric.

<<</New York Times corpus>>>
<<<DUC-2002>>>
We also evaluated the models trained on the CNN/Daily Mail dataset on the out-of-domain DUC-2002 test set, as shown in Table TABREF41. BERT-ext + abs + RL outperforms the baseline models by large margins on all ROUGE scores. This result shows that our model generalizes better.

<<</DUC-2002>>>
<<</Results>>>
<<<Related Work>>>
There have been a variety of deep neural network models for abstractive document summarization. One of the most dominant structures is the sequence-to-sequence (seq2seq) model with an attention mechanism BIBREF37, BIBREF38, BIBREF39. BIBREF25 introduced the Pointer Generator network, which implicitly combines abstraction with extraction using a copy mechanism BIBREF40, BIBREF41. More recently, several studies have attempted to improve abstractive summarization by explicitly combining it with extractive models. Some notable examples include the use of an inconsistency loss BIBREF9, key phrase extraction BIBREF42, BIBREF10, and sentence extraction with rewriting BIBREF8. Our model improves Sentence Rewriting by using BERT as the extractor and summary-level rewards to optimize it.

Reinforcement learning has been shown to be effective for directly optimizing non-differentiable objectives in language generation, including text summarization BIBREF43, BIBREF27, BIBREF35, BIBREF44, BIBREF11. BIBREF27 use actor-critic methods for language generation, applying reward shaping BIBREF12 to alleviate the sparsity of training signals. Inspired by this, we generalize reward shaping to sentence extraction to give per-step rewards while preserving optimality.

<<</Related Work>>>
<<<Conclusions>>>
We have improved Sentence Rewriting approaches for abstractive summarization, proposing a novel extractor architecture that exploits BERT and a novel training procedure that globally optimizes the summary-level ROUGE metric. Our approach achieves new state-of-the-art results on both the CNN/Daily Mail and New York Times datasets, as well as much better generalization on the DUC-2002 test set.

<<</Conclusions>>>
<<</Title>>>
{ "references": [ "Title\nAbstract\nIntroduction\nBackground\nSentence Rewriting\nLearning Sentence Selection\nPre-trained Transformers\nModel\nExtractor Network\nLeveraging Pre-trained Transformers\nSentence Selection\nAbstractor Network\nTraining\nTraining Submodules\nExtractor Pre-training\nAbstractor Training\nGuiding to the Optimal Policy\nExperimental Setup\nDatasets\nImplementation Details\nEvaluation\nResults\nCNN/Daily Mail\nExtractive Summarization\nAbstractive Summarization\nRedundancy Control\nCombinatorial Reward\nHuman Evaluation\nNew York Times corpus\nDUC-2002\nRelated Work\nConclusions" ], "type": "outline" }