| id (stringclasses, 179 values) | question (stringlengths, 8.75k to 85.9k) | answer (dict) |
|---|---|---|
1909.05246
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Self-Attentional Models Application in Task-Oriented Dialogue Generation Systems
<<<Abstract>>>
Self-attentional models are a new paradigm for sequence modelling tasks which differ from common sequence modelling methods, such as recurrence-based and convolution-based sequence learning, in the way that their architecture is only based on the attention mechanism. Self-attentional models have been used in the creation of the state-of-the-art models in many NLP tasks such as neural machine translation, but their usage has not been explored for the task of training end-to-end task-oriented dialogue generation systems yet. In this study, we apply these models on the three different datasets for training task-oriented chatbots. Our finding shows that self-attentional models can be exploited to create end-to-end task-oriented chatbots which not only achieve higher evaluation scores compared to recurrence-based models, but also do so more efficiently.
<<</Abstract>>>
<<<Introduction>>>
Task-oriented chatbots are a type of dialogue generation system which tries to help the users accomplish specific tasks, such as booking a restaurant table or buying movie tickets, in a continuous and uninterrupted conversational interface and usually in as few steps as possible. The development of such systems falls into the Conversational AI domain which is the science of developing agents which are able to communicate with humans in a natural way BIBREF0. Digital assistants such as Apple's Siri, Google Assistant, Amazon Alexa, and Alibaba's AliMe are examples of successful chatbots developed by giant companies to engage with their customers.
There are mainly two different ways to create a task-oriented chatbot: either using a set of hand-crafted and carefully-designed rules, or using a corpus-based method in which the chatbot can be trained with a relatively large corpus of conversational data. Given the abundance of dialogue data, the latter method seems to be a better and a more general approach for developing task-oriented chatbots. The corpus-based method also falls into two main chatbot design architectures, which are pipelined and end-to-end architectures BIBREF1. End-to-end chatbots are usually neural-network based BIBREF2, BIBREF3, BIBREF4, BIBREF5 and thus can be adapted to new domains by training on relevant dialogue datasets for that specific domain. Furthermore, all sequence modelling methods can also be used in training end-to-end task-oriented chatbots. A sequence modelling method receives a sequence as input and predicts another sequence as output. For example, in the case of machine translation the input could be a sequence of words in a given language and the output would be a sentence in a second language. In a dialogue system, an utterance is the input and the predicted sequence of words would be the corresponding response.
Self-attentional models are a new paradigm for sequence modelling tasks which differ from common sequence modelling methods, such as recurrence-based and convolution-based sequence learning, in the way that their architecture is only based on the attention mechanism. The Transformer BIBREF6 and Universal Transformer BIBREF7 models are the first models that entirely rely on the self-attention mechanism for both encoder and decoder, and that is why they are also referred to as self-attentional models. The Transformer model has produced state-of-the-art results in the task of neural machine translation BIBREF6, and this encouraged us to further investigate this model for the task of training task-oriented chatbots. While in the Transformer model there is no recurrence, it turns out that the recurrence used in RNN models is essential for some tasks in NLP, including language understanding tasks, and thus the Transformer fails to generalize in those tasks BIBREF7. We also investigate the usage of the Universal Transformer for this task to see how it compares to the Transformer model.
We focus on self-attentional sequence modelling for this study and intend to provide an answer for one specific question which is:
How effective are self-attentional models for training end-to-end task-oriented chatbots?
Our contribution in this study is as follows:
We train end-to-end task-oriented chatbots using both self-attentional models and common recurrence-based models used in sequence modelling tasks and compare and analyze the results using different evaluation metrics on three different datasets.
We provide insight into how effective self-attentional models are for this task and benchmark the time performance of these models against the recurrence-based sequence modelling methods.
We try to quantify the effectiveness of self-attention mechanism in self-attentional models and compare its effect to recurrence-based models for the task of training end-to-end task-oriented chatbots.
<<</Introduction>>>
<<<Related Work>>>
<<<Task-Oriented Chatbots Architectures>>>
End-to-end architectures are among the most used architectures for research in the field of conversational AI. The advantage of using an end-to-end architecture is that one does not need to explicitly train different components for language understanding and dialogue management and then concatenate them together. Network-based end-to-end task-oriented chatbots as in BIBREF4, BIBREF8 try to model the learning task as a policy learning method in which the model learns to output a proper response given the current state of the dialogue. As discussed before, all encoder-decoder sequence modelling methods can be used for training end-to-end chatbots. Eric and Manning eric2017copy use the copy mechanism augmentation on simple recurrent neural sequence modelling and achieve good results in training end-to-end task-oriented chatbots BIBREF9.
Another popular method for training chatbots is based on memory networks. Memory networks augment the neural networks with task-specific memories which the model can learn to read and write. Memory networks have been used in BIBREF8 for training task-oriented agents in which they store dialogue context in the memory module, and then the model uses it to select a system response (also stored in the memory module) from a set of candidates. A variation of key-value memory networks BIBREF10 has been used in BIBREF11 for training task-oriented chatbots, which stores the knowledge base in the form of (subject, relation, object) triplets (such as (yoga, time, 3pm)) in the key-value memory network; the model then tries to select the most relevant entity from the memory and create a relevant response. This approach makes the interaction with the knowledge base smoother compared to other models.
Another approach for training end-to-end task-oriented dialogue systems models task-oriented dialogue generation as a reinforcement learning problem in which the current state of the conversation is passed to a sequence learning network, and this network decides the action which the chatbot should act upon. The end-to-end LSTM-based model BIBREF12 and the Hybrid Code Networks BIBREF13 can use both supervised and reinforcement learning approaches for training task-oriented chatbots.
<<</Task-Oriented Chatbots Architectures>>>
<<<Sequence Modelling Methods>>>
Sequence modelling methods usually fall into recurrence-based, convolution-based, and self-attentional-based methods. In recurrence-based sequence modeling, the words are fed into the model in a sequential way, and the model learns the dependencies between the tokens given the context from the past (and the future in case of bidirectional Recurrent Neural Networks (RNNs)) BIBREF14. RNNs and their variations such as Long Short-term Memory (LSTM) BIBREF15, and Gated Recurrent Units (GRU) BIBREF16 are the most widely used recurrence-based models used in sequence modelling tasks. Convolution-based sequence modelling methods rely on Convolutional Neural Networks (CNN) BIBREF17 which are mostly used for vision tasks but can also be used for handling sequential data. In CNN-based sequence modelling, multiple CNN layers are stacked on top of each other to give the model the ability to learn long-range dependencies. The stacking of layers in CNNs for sequence modeling allows the model to grow its receptive field, or in other words context size, and thus can model complex dependencies between different sections of the input sequence BIBREF18, BIBREF19. WaveNet van2016wavenet, used in audio synthesis, and ByteNet kalchbrenner2016neural, used in machine translation tasks, are examples of models trained using convolution-based sequence modelling.
<<</Sequence Modelling Methods>>>
<<</Related Work>>>
<<<Models>>>
We compare the most commonly used recurrence-based models for sequence modelling and contrast them with Transformer and Universal Transformer models. The models that we train are:
<<<LSTM and Bi-Directional LSTM>>>
Long Short-term Memory (LSTM) networks are a special kind of RNN networks which can learn long-term dependencies BIBREF15. RNN models suffer from the vanishing gradient problem BIBREF20 which makes it hard for RNN models to learn long-term dependencies. The LSTM model tackles this problem by defining a gating mechanism which introduces input, output and forget gates, and the model has the ability to decide how much of the previous information it needs to keep and how much of the new information it needs to integrate and thus this mechanism helps the model keep track of long-term dependencies.
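For reference, a standard formulation of this gating mechanism (following BIBREF15; the notation below is ours) is

$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$$

$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c), \quad h_t = o_t \odot \tanh(c_t)$$

where $f_t$, $i_t$, and $o_t$ are the forget, input, and output gates, $c_t$ is the cell state, and $h_t$ is the hidden state at time step $t$.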
Bi-directional LSTMs BIBREF21 are a variation of LSTMs which have proved to give better results for some NLP tasks BIBREF22. The idea behind a Bi-directional LSTM is to give the network (while training) the ability to look not only at past tokens, as an LSTM does, but also at future tokens, so the model has access to information both from the past and the future. In the case of a task-oriented dialogue generation system, the information the model needs in order to learn the dependencies between tokens sometimes comes from tokens that are ahead of the current index, and if the model is able to take future tokens into account it can learn more efficiently.
<<</LSTM and Bi-Directional LSTM>>>
<<<Transformer>>>
As discussed before, the Transformer is the first model that entirely relies on the self-attention mechanism for both the encoder and the decoder. The Transformer uses the self-attention mechanism to learn a representation of a sentence by relating different positions of that sentence. Like many sequence modelling methods, the Transformer follows the encoder-decoder architecture in which the input is given to the encoder and the result of the encoder is passed to the decoder to create the output sequence. The difference between the Transformer (which is a self-attentional model) and other sequence models (such as recurrence-based and convolution-based ones) is that the encoder and decoder architecture is only based on the self-attention mechanism. The Transformer also uses multi-head attention, which is intended to give the model the ability to look at different representations of different positions of the input (encoder self-attention), the output (decoder self-attention), and between input and output (encoder-decoder attention) BIBREF6. It has been used in a variety of NLP tasks such as mathematical language understanding [110], language modeling BIBREF23, machine translation BIBREF6, question answering BIBREF24, and text summarization BIBREF25.
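For reference, the scaled dot-product attention and its multi-head extension used by the Transformer are defined in BIBREF6 as

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$$

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^O, \quad \mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$$

where $Q$, $K$, and $V$ are the query, key, and value matrices and $d_k$ is the key dimension.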
<<</Transformer>>>
<<<Universal Transformer>>>
The Universal Transformer model is an encoder-decoder-based sequence-to-sequence model which applies recurrence to the representation of each of the positions of the input and output sequences. The main difference between the RNN recurrence and the Universal Transformer recurrence is that the recurrence used in the Universal Transformer is applied on consecutive representation vectors of each token in the sequence (i.e., over depth) whereas in the RNN models this recurrence is applied on positions of the tokens in the sequence. A variation of the Universal Transformer, called Adaptive Universal Transformer, applies the Adaptive Computation Time (ACT) BIBREF26 technique on the Universal Transformer model which makes the model train faster since it saves computation time and also in some cases can increase the model accuracy. The ACT allows the Universal Transformer model to use different recurrence time steps for different tokens.
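A structural sketch of this depth-wise recurrence is given below; the attention and transition functions are passed in as placeholders, so this is only a minimal illustration, not the tensor2tensor implementation.

```python
from typing import Callable
import numpy as np

def universal_transformer_block(
    x: np.ndarray,                                    # (batch, seq_len, d_model)
    self_attention: Callable[[np.ndarray], np.ndarray],
    transition: Callable[[np.ndarray], np.ndarray],
    num_steps: int,
) -> np.ndarray:
    """Refine all positions in parallel over `num_steps` depth steps.

    The recurrence runs over consecutive representations of each token (depth),
    not over sequence positions as in an RNN, and weights are shared across steps.
    The Adaptive Universal Transformer replaces the fixed `num_steps` with ACT,
    halting the refinement of each position dynamically.
    """
    for _ in range(num_steps):
        x = transition(self_attention(x))
    return x
```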
We know, based on reported evidence, that Transformers are potent in NLP tasks like translation and question answering. Our aim is to assess the applicability and effectiveness of Transformers and Universal Transformers in the domain of task-oriented conversational agents. In the next section, we report on experiments that investigate the performance of self-attentional models against the aforementioned models for the task of training end-to-end task-oriented chatbots.
<<</Universal Transformer>>>
<<</Models>>>
<<<Experiments>>>
We run our experiments on a Tesla 960M Graphics Processing Unit (GPU). We evaluated the models using the aforementioned metrics and also applied early stopping (with delta set to 0.1 for 600 training steps).
<<<Datasets>>>
We use three different datasets for training the models. We use the Dialogue State Tracking Competition 2 (DSTC2) dataset BIBREF27 which is the most widely used dataset for research on task-oriented chatbots. We also used two other datasets recently open-sourced by Google Research BIBREF28 which are M2M-sim-M (dataset in movie domain) and M2M-sim-R (dataset in restaurant domain). M2M stands for Machines Talking to Machines which refers to the framework with which these two datasets were created. In this framework, dialogues are created via dialogue self-play and later augmented via crowdsourcing. We trained our models on different datasets in order to make sure the results are not corpus-biased. Table TABREF12 shows the statistics of these three datasets which we will use to train and evaluate the models.
The M2M dataset has more diversity in both language and dialogue flow compared to the commonly used DSTC2 dataset, which makes it appealing for the task of creating task-oriented chatbots. This is also the reason we decided to use the M2M dataset in our experiments, to see how well the models can handle a more diverse dataset.
<<<Dataset Preparation>>>
We followed the data preparation process used for feeding the conversation history into the encoder-decoder as in BIBREF5. Consider a sample dialogue $D$ in the corpus which consists of a number of turns exchanged between the user and the system. $D$ can be represented as ${(u_1, s_1),(u_2, s_2), ...,(u_k, s_k)}$ where $k$ is the number of turns in this dialogue. At each time step in the conversation, we encode the conversation turns up to that time step, which is the context of the dialogue so far, and the system response after that time step will be used as the target. For example, given we are processing the conversation at time step $i$, the context of the conversation so far would be ${(u_1, s_1, u_2, s_2, ..., u_i)}$ and the model has to learn to output ${(s_i)}$ as the target.
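A minimal sketch of this context/target construction (variable and function names are ours):

```python
from typing import List, Tuple

def build_training_pairs(dialogue: List[Tuple[str, str]]) -> List[Tuple[List[str], str]]:
    """Turn a dialogue [(u_1, s_1), ..., (u_k, s_k)] into (context, target) pairs.

    At turn i the context is (u_1, s_1, ..., u_i) and the target is s_i.
    """
    pairs = []
    context: List[str] = []
    for user_utt, system_resp in dialogue:
        context.append(user_utt)
        pairs.append((list(context), system_resp))  # context up to u_i -> target s_i
        context.append(system_resp)
    return pairs

# A two-turn dialogue yields two training pairs; the second pair's context is
# (u_1, s_1, u_2) and its target is s_2.
pairs = build_training_pairs([("persian food", "what area?"), ("city centre", "booked!")])
assert pairs[1] == (["persian food", "what area?", "city centre"], "booked!")
```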
<<</Dataset Preparation>>>
<<</Datasets>>>
<<<Training>>>
We used the tensor2tensor library BIBREF29 in our experiments for training and evaluation of sequence modeling methods. We use the Adam optimizer BIBREF30 for training the models. We set $\beta _1=0.9$, $\beta _2=0.997$, and $\epsilon =1e-9$ for the Adam optimizer and started with a learning rate of 0.2 with the noam learning rate decay schema BIBREF6. In order to avoid overfitting, we use dropout BIBREF31 with the dropout rate chosen from the [0.7-0.9] range. We also applied early stopping BIBREF14 as an additional regularization method. We set the batch size to 4096, the hidden size to 128, and the embedding size to 128 for all the models. We also used grid search for hyperparameter tuning for all of the trained models. Details of our training and hyperparameter tuning and the code for reproducing the results can be found in the chatbot-exp github repository.
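A sketch of the noam learning-rate schedule referenced above (linear warmup followed by inverse-square-root decay, as formulated in BIBREF6); the warmup value below is illustrative and the 0.2 factor is interpreted here as a constant scale, not necessarily the exact tensor2tensor setting:

```python
def noam_learning_rate(step: int, d_model: int = 128,
                       warmup_steps: int = 4000, scale: float = 0.2) -> float:
    """Noam schedule: lr = scale * d_model^{-0.5} * min(step^{-0.5}, step * warmup^{-1.5})."""
    step = max(step, 1)
    return scale * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```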
<<</Training>>>
<<<Inference>>>
At inference time, there are mainly two methods for decoding: greedy search and beam search BIBREF32. Beam search has proved to be an essential part of generative NLP tasks such as neural machine translation BIBREF33. In the case of dialogue generation systems, beam search can help alleviate the problem of having many possible outputs which do not match the target but are nonetheless valid and sensible. Consider the case in which a task-oriented chatbot, trained for a restaurant reservation task, in response to the user utterance “Persian food”, generates the response “what time and day would you like the reservation for?” but the target defined for the system is “would you like a fancy restaurant?”. The response generated by the chatbot is a valid response which asks the user about other possible entities but does not match the defined target.
We try to alleviate this problem in inference time by applying the beam search technique with a different beam size $\alpha \in \lbrace 1, 2, 4\rbrace $ and pick the best result based on the BLEU score. Note that when $\alpha = 1$, we are using the original greedy search method for the generation task.
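A sketch of this selection procedure; `decode` and `corpus_bleu` are placeholders for the tensor2tensor decoder and the BLEU implementation, passed in as arguments rather than assumed to exist under these names:

```python
def pick_best_beam_size(model, contexts, references, decode, corpus_bleu,
                        beam_sizes=(1, 2, 4)):
    """Decode the evaluation set with each beam size and keep the best by BLEU.

    beam_size == 1 corresponds to plain greedy decoding.
    """
    best = None
    for beam_size in beam_sizes:
        hypotheses = [decode(model, context, beam_size=beam_size) for context in contexts]
        score = corpus_bleu(hypotheses, references)
        if best is None or score > best[1]:
            best = (beam_size, score, hypotheses)
    return best  # (best beam size, its BLEU score, the generated responses)
```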
<<</Inference>>>
<<<Evaluation Measures>>>
BLEU: We use the Bilingual Evaluation Understudy (BLEU) BIBREF34 metric which is commonly used in machine translation tasks. The BLEU metric can be used to evaluate dialogue generation models as in BIBREF5, BIBREF35. The BLEU metric is a word-overlap metric which computes the co-occurrence of N-grams in the reference and the generated response and also applies the brevity penalty which tries to penalize far too short responses which are usually not desired in task-oriented chatbots. We compute the BLEU score using all generated responses of our systems.
Per-turn Accuracy: Per-turn accuracy measures the similarity of the system generated response versus the target response. Eric and Manning eric2017copy used this metric to evaluate their systems in which they considered their response to be correct if all tokens in the system generated response matched the corresponding token in the target response. This metric is a little bit harsh, and the results may be low since all the tokens in the generated response have to be exactly in the same position as in the target response.
Per-Dialogue Accuracy: We calculate per-dialogue accuracy as used in BIBREF8, BIBREF5. For this metric, we consider all the system generated responses and compare them to the target responses. A dialogue is considered to be true if all the turns in the system generated responses match the corresponding turns in the target responses. Note that this is a very strict metric in which all the utterances in the dialogue should be the same as the target and in the right order.
F1-Entity Score: Datasets used for task-oriented chatbots have a set of entities which represent user preferences. For example, in the restaurant domain common entities are meal, restaurant name, date, time and the number of people (these are usually the required entities which are crucial for making reservations, but there could be optional entities such as location or rating). Each target response has a set of entities which the system asks or informs the user about. Our models have to be able to discern these specific entities and inject them into the generated response. To evaluate our models we can use named-entity recognition evaluation metrics BIBREF36. The F1 score is the most commonly used metric for the evaluation of named-entity recognition models and is the harmonic mean of the precision and recall of the model. We calculate this metric by micro-averaging over all the system generated responses.
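A minimal sketch of the accuracy and entity F1 computations described above (token-level exact match for the accuracies; entity extraction is assumed to be done separately, e.g. by matching generated responses against known entity values; all names are ours):

```python
from typing import List, Set

def per_turn_accuracy(predicted: List[str], target: List[str]) -> float:
    """Fraction of system responses whose tokens exactly match the target response."""
    correct = sum(p.split() == t.split() for p, t in zip(predicted, target))
    return correct / len(target)

def per_dialogue_accuracy(pred_dialogues: List[List[str]],
                          target_dialogues: List[List[str]]) -> float:
    """Fraction of dialogues in which every turn exactly matches the target."""
    correct = sum(
        all(p.split() == t.split() for p, t in zip(pd, td))
        for pd, td in zip(pred_dialogues, target_dialogues)
    )
    return correct / len(target_dialogues)

def entity_f1(pred_entities: List[Set[str]], gold_entities: List[Set[str]]) -> float:
    """Micro-averaged F1 over entities aggregated across all system responses."""
    tp = sum(len(p & g) for p, g in zip(pred_entities, gold_entities))
    fp = sum(len(p - g) for p, g in zip(pred_entities, gold_entities))
    fn = sum(len(g - p) for p, g in zip(pred_entities, gold_entities))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```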
<<</Evaluation Measures>>>
<<</Experiments>>>
<<<Results and Discussion>>>
<<<Comparison of Models>>>
The results of running the experiments for the aforementioned models are shown in Table TABREF14 for the DSTC2 dataset and in Table TABREF18 for the M2M datasets. The bold numbers show the best performing model in each of the evaluation metrics. As discussed before, for each model we use different beam sizes (bs) at inference time and report the best one. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in BLEU, per-turn accuracy, and entity F1 score. The evaluation numbers are considerably lower on the M2M datasets; in our investigation of the trained models we found that this reduction is due to the fact that the M2M datasets are considerably more diverse than the DSTC2 dataset while their training corpora are smaller.
<<</Comparison of Models>>>
<<<Time Performance Comparison>>>
Table TABREF22 shows the time performance of the models trained on DSTC2 dataset. Note that in order to get a fair time performance comparison, we trained the models with the same batch size (4096) and on the same GPU. These numbers are for the best performing model (in terms of evaluation loss and selected using the early stopping method) for each of the sequence modelling methods. Time to Convergence (T2C) shows the approximate time that the model was trained to converge. We also show the loss in the development set for that specific checkpoint.
<<</Time Performance Comparison>>>
<<<Effect of (Self-)Attention Mechanism>>>
As discussed before in Section SECREF8, self-attentional models rely on the self-attention mechanism for sequence modelling. Recurrence-based models such as LSTM and Bi-LSTM can also be augmented in order to increase their performance, as evident in Table TABREF14 which shows the increase in the performance of both LSTM and Bi-LSTM when augmented with an attention mechanism. This leads to the question whether we can increase the performance of recurrence-based models by adding multiple attention heads, similar to the multi-head self-attention mechanism used in self-attentional models, and outperform the self-attentional models.
To investigate this question, we ran a number of experiments in which we added multiple attention heads on top of Bi-LSTM model and also tried a different number of self-attention heads in self-attentional models in order to compare their performance for this specific task. Table TABREF25 shows the results of these experiments. Note that the models in Table TABREF25 are actually the best models that we found in our experiments on DSTC2 dataset and we only changed one parameter for each of them, i.e. the number of attention heads in the recurrence-based models and the number of self-attention heads in the self-attentional models, keeping all other parameters unchanged. We also report the results of models with beam size of 2 in inference time. We increased the number of attention heads in the Bi-LSTM model up to 64 heads to see its performance change. Note that increasing the number of attention heads makes the training time intractable and time consuming while the model size would increase significantly as shown in Table TABREF24. Furthermore, by observing the results of the Bi-LSTM+Att model in Table TABREF25 (both test and development set) we can see that Bi-LSTM performance decreases and thus there is no need to increase the attention heads further.
Our findings in Table TABREF25 show that the self-attention mechanism can outperform recurrence-based models even if the recurrence-based models have multiple attention heads. The Bi-LSTM model with 64 attention heads cannot beat the best Transformer model with NH=4, and its results are very close to those of the Transformer model with NH=1. This observation clearly depicts the power of self-attentional models and demonstrates that the attention mechanism used in self-attentional models as the backbone for learning outperforms recurrence-based models even if they are augmented with multiple attention heads.
<<</Effect of (Self-)Attention Mechanism>>>
<<</Results and Discussion>>>
<<<Conclusion and Future Work>>>
We have determined that Transformers and Universal Transformers are indeed effective at generating appropriate responses in task-oriented chatbot systems. In fact, their performance is even better than the typically used deep learning architectures. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in BLEU, per-turn accuracy, and entity F1 score. The Transformer model beats all other models in all of the evaluation metrics. Also, comparing the results of LSTM with and without the attention mechanism, as well as Bi-LSTM with and without the attention mechanism, it can be observed that adding the attention mechanism increases the performance of the models. Comparing the results of the self-attentional models shows that the Transformer model outperforms the other self-attentional models, while the Universal Transformer model gives reasonably good results.
In future work, it would be interesting to compare the performance of self-attentional models (specifically the winning Transformer model) against other end-to-end architectures such as the Memory Augmented Networks.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Introduction, Conclusion and Future Work"
],
"type": "disordered_section"
}
|
1908.06083
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack
<<<Abstract>>>
The detection of offensive language in the context of a dialogue has become an increasingly important application of natural language processing. The detection of trolls in public forums (Galan-Garcia et al., 2016), and the deployment of chatbots in the public domain (Wolf et al., 2017) are two examples that show the necessity of guarding against adversarially offensive behavior on the part of humans. In this work, we develop a training scheme for a model to become robust to such human attacks by an iterative build it, break it, fix it strategy with humans and models in the loop. In detailed experiments we show this approach is considerably more robust than previous systems. Further, we show that offensive language used within a conversation critically depends on the dialogue context, and cannot be viewed as a single sentence offensive detection task as in most previous work. Our newly collected tasks and methods will be made open source and publicly available.
<<</Abstract>>>
<<<Introduction>>>
The detection of offensive language has become an important topic as the online community has grown, and so too has the number of bad actors BIBREF2. Such behavior includes, but is not limited to, trolling in public discussion forums BIBREF3 and via social media BIBREF4, BIBREF5, employing hate speech that expresses prejudice against a particular group, or offensive language specifically targeting an individual. Such actions can be motivated to cause harm from which the bad actor derives enjoyment, despite negative consequences to others BIBREF6. As such, some bad actors go to great lengths both to avoid detection and to achieve their goals BIBREF7. In that context, any attempt to automatically detect this behavior can be expected to be adversarially attacked by looking for weaknesses in the detection system, which currently can easily be exploited as shown in BIBREF8, BIBREF9. A further example, relevant to the natural language processing community, is the exploitation of weaknesses in machine learning models that generate text, to force them to emit offensive language. Adversarial attacks on the Tay chatbot led to the developers shutting down the system BIBREF1.
In this work, we study the detection of offensive language in dialogue with models that are robust to adversarial attack. We develop an automatic approach to the “Build it Break it Fix it” strategy originally adopted for writing secure programs BIBREF10, and to the “Build it Break it” approach subsequently adapted for NLP BIBREF11. In the latter work, two teams of researchers, “builders” and “breakers”, were used to first create sentiment and semantic role-labeling systems and then construct examples that find their faults. In this work we instead fully automate such an approach using crowdworkers as the humans-in-the-loop, and also apply a fixing stage where models are retrained to improve them. Finally, we repeat the whole build, break, and fix sequence over a number of iterations.
We show that such an approach provides more and more robust systems over the fixing iterations. Analysis of the type of data collected in the iterations of the break it phase shows clear distribution changes, moving away from simple use of profanity and other obvious offensive words to utterances that require understanding of world knowledge, figurative language, and use of negation to detect if they are offensive or not. Further, data collected in the context of a dialogue rather than a sentence without context provides more sophisticated attacks. We show that model architectures that use the dialogue context efficiently perform much better than systems that do not, where the latter has been the main focus of existing research BIBREF12, BIBREF5, BIBREF13.
Code for our entire build it, break it, fix it algorithm will be made open source, complete with model training code and crowdsourcing interface for humans. Our data and trained models will also be made available for the community.
<<</Introduction>>>
<<<Related Work>>>
The task of detecting offensive language has been studied across a variety of content classes. Perhaps the most commonly studied class is hate speech, but work has also covered bullying, aggression, and toxic comments BIBREF13.
To this end, various datasets have been created to benchmark progress in the field. In hate speech detection, recently BIBREF5 compiled and released a dataset of over 24,000 tweets labeled as containing hate speech, offensive language, or neither. The TRAC shared task on Aggression Identification, a dataset of over 15,000 Facebook comments labeled with varying levels of aggression, was released as part of a competition BIBREF14. In order to benchmark toxic comment detection, the Wikipedia Toxic Comments dataset (which we study in this work) was collected and extracted from Wikipedia Talk pages and featured in a Kaggle competition BIBREF12, BIBREF15. Each of these benchmarks examines only single-turn utterances, outside of the context in which the language appeared. In this work we recommend that future systems should move beyond classification of singular utterances and use contextual information to help identify offensive language.
Many approaches have been taken to solve these tasks – from linear regression and SVMs to deep learning BIBREF16. The best performing systems in each of the competitions mentioned above (for aggression and toxic comment classification) used deep learning approaches such as LSTMs and CNNs BIBREF14, BIBREF15. In this work we consider a large-pretrained transformer model which has been shown to perform well on many downstream NLP tasks BIBREF17.
The broad class of adversarial training is currently a hot topic in machine learning BIBREF18. Use cases include training image generators BIBREF19 as well as image classifiers to be robust to adversarial examples BIBREF20. These methods find the breaking examples algorithmically, rather than by using human breakers as we do. Applying the same approaches to NLP tends to be more challenging because, unlike for images, even small changes to a sentence can cause a large change in the meaning of that sentence, which a human can detect but a lower quality model cannot. Nevertheless, algorithmic approaches have been attempted, for example in text classification BIBREF21, machine translation BIBREF22, dialogue generation tasks BIBREF23 and reading comprehension BIBREF24. The latter was particularly effective at proposing a more difficult version of the popular SQuAD dataset.
As mentioned in the introduction, our approach takes inspiration from “Build it Break it” approaches which have been successfully tried in other domains BIBREF10, BIBREF11. Those approaches advocate finding faults in systems by having humans look for insecurities (in software) or prediction failures (in models), but do not advocate an automated approach as we do here. Our work is also closely connected to the “Mechanical Turker Descent” algorithm detailed in BIBREF25 where language to action pairs were collected from crowdworkers by incentivizing them with a game-with-a-purpose technique: a crowdworker receives a bonus if their contribution results in better models than another crowdworker. We did not gamify our approach in this way, but still our approach has commonalities in the round-based improvement of models through crowdworker interaction.
<<</Related Work>>>
<<<Baselines: Wikipedia Toxic Comments>>>
In this section we describe the publicly available data that we have used to bootstrap our build it break it fix it approach. We also compare our model choices with existing work and clarify the metrics chosen to report our results.
<<<Wikipedia Toxic Comments>>>
The Wikipedia Toxic Comments dataset (WTC) has been collected in a common effort from the Wikimedia Foundation and Jigsaw BIBREF12 to identify personal attacks online. The data has been extracted from the Wikipedia Talk pages, discussion pages where editors can discuss improvements to articles or other Wikipedia pages. We considered the version of the dataset that corresponds to the Kaggle competition: “Toxic Comment Classification Challenge" BIBREF15 which features 7 classes of toxicity: toxic, severe toxic, obscene, threat, insult, identity hate and non-toxic. In the same way as in BIBREF26, every label except non-toxic is grouped into an offensive class while the non-toxic class is kept as the safe class. In order to compare our results to BIBREF26, we similarly split this dataset to dedicate 10% as a test set. 80% is dedicated to the train set while the remaining 10% is used for validation. Statistics on the dataset are shown in Table TABREF4.
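A sketch of this label grouping and 80/10/10 split, assuming the column layout of the Kaggle train.csv file (one binary column per toxicity class; the file path is an assumption):

```python
import pandas as pd

TOXIC_COLUMNS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

df = pd.read_csv("train.csv")  # Kaggle "Toxic Comment Classification Challenge" file (assumed path)
# Every label except non-toxic is grouped into the offensive class.
df["label"] = (df[TOXIC_COLUMNS].sum(axis=1) > 0).map({True: "offensive", False: "safe"})

# 80% train / 10% validation / 10% test.
df = df.sample(frac=1.0, random_state=0)  # shuffle
n = len(df)
train_df = df.iloc[: int(0.8 * n)]
valid_df = df.iloc[int(0.8 * n): int(0.9 * n)]
test_df = df.iloc[int(0.9 * n):]
```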
<<</Wikipedia Toxic Comments>>>
<<<Models>>>
We establish baselines using two models. The first one is a binary classifier built on top of a large pre-trained transformer model. We use the same architecture as in BERT BIBREF17. We add a linear layer to the output of the first token ([CLS]) to produce a final binary classification. We initialize the model using the weights provided by BIBREF17 corresponding to “BERT-base". The transformer is composed of 12 layers with hidden size of 768 and 12 attention heads. We fine-tune the whole network on the classification task. We also compare it to the fastText classifier BIBREF27, for which a given sentence is encoded as the average of individual word vectors pre-trained on a large corpus derived from Wikipedia. A linear layer is then applied on top to yield a binary classification.
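A minimal sketch of this classifier using the HuggingFace transformers and PyTorch libraries (a convenient stand-in, not the authors' exact implementation):

```python
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class OffensiveClassifier(nn.Module):
    """BERT-base encoder with a linear layer on the [CLS] token for binary classification."""

    def __init__(self) -> None:
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")  # 12 layers, hidden 768, 12 heads
        self.classifier = nn.Linear(self.bert.config.hidden_size, 2)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = outputs.last_hidden_state[:, 0]  # representation of the [CLS] token
        return self.classifier(cls_repr)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = OffensiveClassifier()
batch = tokenizer(["example message"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape (1, 2): safe vs offensive
```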
<<</Models>>>
<<<Experiments>>>
We compare the two aforementioned models with BIBREF26 who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors BIBREF28. Results are listed in Table TABREF5 and we compare them using the weighted-F1, i.e. the sum of F1 score of each class weighted by their frequency in the dataset. We also report the F1 of the offensive-class which is the metric we favor within this work, although we report both. (Note that throughout the paper, the notation F1 is always referring to offensive-class F1.) Indeed, in the case of an imbalanced dataset such as Wikipedia Toxic Comments where most samples are safe, the weighted-F1 is closer to the F1 score of the safe class while we focus on detecting offensive content. Our BERT-based model outperforms the method from BIBREF26; throughout the rest of the paper, we use the BERT-based architecture in our experiments. In particular, we used this baseline trained on WTC to bootstrap our approach, to be described subsequently.
<<</Experiments>>>
<<</Baselines: Wikipedia Toxic Comments>>>
<<<Build it Break it Fix it Method>>>
In order to train models that are robust to adversarial behavior, we posit that it is crucial to collect and train on data that was collected in an adversarial manner. We propose the following automated build it, break it, fix it algorithm:
Build it: Build a model capable of detecting offensive messages. This is our best-performing BERT-based model trained on the Wikipedia Toxic Comments dataset described in the previous section. We refer to this model throughout as $A_0$.
Break it: Ask crowdworkers to try to “beat the system" by submitting messages that our system ($A_0$) marks as safe but that the worker considers to be offensive.
Fix it: Train a new model on these collected examples in order to be more robust to these adversarial attacks.
Repeat: Repeat, deploying the newly trained model in the break it phase, then fix it again.
See Figure FIGREF6 for a visualization of this process.
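A compact sketch of this loop; `train` and `crowdsource_break` stand in for the model training and crowdsourcing steps described above and are placeholders, not the authors' code:

```python
def build_break_fix(wtc_data, train, crowdsource_break, num_rounds=3):
    """Iterate build -> break -> fix, accumulating adversarial data each round."""
    a0 = train([wtc_data])                       # Build it: baseline trained on WTC only
    models, adversarial_rounds = [a0], []
    for _ in range(num_rounds):
        # Break it: crowdworkers submit offensive messages that both the baseline
        # and the latest fixed model mark as safe.
        new_round = crowdsource_break(models_to_beat=[a0, models[-1]])
        adversarial_rounds.append(new_round)
        # Fix it: retrain on WTC plus all adversarial rounds collected so far.
        models.append(train([wtc_data] + adversarial_rounds))
    return models[-1]
```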
<<<Break it Details>>>
<<<Definition of offensive>>>
Throughout data collection, we characterize offensive messages for users as messages that would not be “ok to send in a friendly conversation with someone you just met online." We use this specific language in an attempt to capture various classes of content that would be considered unacceptable in a friendly conversation, without imposing our own definitions of what that means. The phrase “with someone you just met online" was meant to mimic the setting of a public forum.
<<</Definition of offensive>>>
<<<Crowdworker Task>>>
We ask crowdworkers to try to “beat the system" by submitting messages that our system marks as safe but that the worker considers to be offensive. For a given round, workers earn a “game” point each time they are able to “beat the system," or in other words, trick the model by submitting offensive messages that the model marks as safe. Workers earn up to 5 points each round, and have two tries for each point: we allow multiple attempts per point so that workers can get feedback from the models and better understand their weaknesses. The points serve to indicate success to the crowdworker and motivate them to achieve high scores, but have no other meaning (e.g. no monetary value as in BIBREF25). More details regarding the user interface and instructions can be found in Appendix SECREF9.
<<</Crowdworker Task>>>
<<<Models to Break>>>
During round 1, workers try to break the baseline model $A_0$, trained on Wikipedia Toxic Comments. For rounds $i$, $i > 1$, workers must break both the baseline model and the model from the previous “fix it" round, which we refer to as $A_{i-1}$. In that case, the worker must submit messages that both $A_0$ and $A_{i-1}$ mark as safe but which the worker considers to be offensive.
<<</Models to Break>>>
<<</Break it Details>>>
<<<Fix it Details>>>
During the “fix it" round, we update the models with the newly collected adversarial data from the “break it" round.
The training data consists of all previous rounds of data, so that model $A_i$ is trained on all rounds $n$ for $n \le i$, as well as the Wikipedia Toxic Comments data. We split each round of data into train, validation, and test partitions. The validation set is used for hyperparameter selection. The test sets are used to measure how robust we are to new adversarial attacks. With increasing round $i$, $A_i$ should become more robust to increasingly complex human adversarial attacks.
<<</Fix it Details>>>
<<</Build it Break it Fix it Method>>>
<<<Single-Turn Task>>>
We first consider a single-turn set-up, i.e. detection of offensive language in one utterance, with no dialogue context or conversational history.
<<<Data Collection>>>
<<<Adversarial Collection>>>
We collected three rounds of data with the build it, break it, fix it algorithm described in the previous section. Each round of data consisted of 1000 examples, leading to 3000 single-turn adversarial examples in total. For the remainder of the paper, we refer to this method of data collection as the adversarial method.
<<</Adversarial Collection>>>
<<<Standard Collection>>>
In addition to the adversarial method, we also collected data in a non-adversarial manner in order to directly compare the two set-ups. In this method, which we refer to as the standard method, we simply ask crowdworkers to submit messages that they consider to be offensive. There is no model to break. Instructions are otherwise the same.
In this set-up, there is no real notion of “rounds", but for the sake of comparison we refer to each subsequent 1000 examples collected in this manner as a “round". We collect 3000 examples – or three rounds of data. We refer to a model trained on rounds $n \le i$ of the standard data as $S_i$.
<<</Standard Collection>>>
<<<Task Formulation Details>>>
Since all of the collected examples are labeled as offensive, to make this task a binary classification problem, we will also add safe examples to it.
The “safe data" is comprised of utterances from the ConvAI2 chit-chat task BIBREF29, BIBREF30 which consists of pairs of humans getting to know each other by discussing their interests. Each utterance we used was reviewed by two independent crowdworkers and labeled as safe, with the same characterization of safe as described before.
For each partition (train, validation, test), the final task has a ratio of 9:1 safe to offensive examples, mimicking the division of the Wikipedia Toxic Comments dataset used for training our baseline models. Dataset statistics for the final task can be found in Table TABREF21. We refer to these tasks – with both safe and offensive examples – as the adversarial and standard tasks.
<<</Task Formulation Details>>>
<<<Model Training Details>>>
Using the BERT-based model architecture described in Section SECREF3, we trained models on each round of the standard and adversarial tasks, multi-tasking with the Wikipedia Toxic Comments task. We weight the multi-tasking with a mixing parameter which is also tuned on the validation set. Finally, after training weights with the cross entropy loss, we adjust the final bias also using the validation set. We optimize for the sensitive class (i.e. offensive-class) F1 metric on the standard and adversarial validation sets respectively.
For each task (standard and adversarial), on round $i$, we train on data from all rounds $n$ for $n \le i$ and optimize for performance on the validation sets $n \le i$.
<<</Model Training Details>>>
<<</Data Collection>>>
<<<Experimental Results>>>
We conduct experiments comparing the adversarial and standard methods. We break down the results into “break it" results comparing the data collected and “fix it" results comparing the models obtained.
<<<Break it Phase>>>
Examples obtained from both the adversarial and standard collection methods were found to be clearly offensive, but we note several differences in the distribution of examples from each task, shown in Table TABREF21. First, examples from the standard task tend to contain more profanity. Using a list of common English obscenities and otherwise bad words, in Table TABREF21 we calculate the percentage of examples in each task containing such obscenities, and see that the standard examples contain at least seven times as many as each round of the adversarial task. Additionally, in previous works, authors have observed that classifiers struggle with negations BIBREF8. This is borne out by our data: examples from the single-turn adversarial task more often contain the token “not" than examples from the standard task, indicating that users are easily able to fool the classifier with negations.
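A sketch of how such surface statistics can be computed (the obscenity list itself is not included here, and tokenization is naive whitespace splitting):

```python
from typing import Iterable, Set

def fraction_containing(examples: Iterable[str], vocabulary: Set[str]) -> float:
    """Fraction of examples containing at least one token from `vocabulary`
    (e.g. an obscenity list, or simply {"not"} to count negations)."""
    examples = list(examples)
    vocab = {w.lower() for w in vocabulary}
    hits = sum(any(tok in vocab for tok in ex.lower().split()) for ex in examples)
    return hits / len(examples)
```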
We also anecdotally see figurative language such as “snakes hiding in the grass” in the adversarial data; such examples contain no individually offensive words, and their offensive nature is only captured by reading the entire sentence. Other examples require sophisticated world knowledge, such as that many cultures consider eating cats to be offensive. To quantify these differences, we performed a blind human annotation of a sample of the data, 100 examples of standard and 100 examples of adversarial round 1. Results are shown in Table TABREF16. Adversarial data was indeed found to contain less profanity, fewer non-profane but offending words (such as “idiot”), more figurative language, and to require more world knowledge.
We note that, as anticipated, the task becomes more challenging for the crowdworkers with each round, indicated by the decreasing average scores in Table TABREF27. In round 1, workers are able to get past $A_0$ most of the time – earning an average score of $4.56$ out of 5 points per round – showcasing how susceptible this baseline is to adversarial attack despite its relatively strong performance on the Wikipedia Toxic Comments task. By round 3, however, workers struggle to trick the system, earning an average score of only $1.6$ out of 5. A finer-grained assessment of the worker scores can be found in Table TABREF38 in the appendix.
<<</Break it Phase>>>
<<<Fix it Phase>>>
Results comparing the performance of models trained on the adversarial ($A_i$) and standard ($S_i$) tasks are summarized in Table TABREF22, with further results in Table TABREF41 in Appendix SECREF40. The adversarially trained models $A_i$ prove to be more robust to adversarial attack: on each round of adversarial testing they outperform standard models $S_i$.
Further, note that the adversarial task becomes harder with each subsequent round. In particular, the performance of the standard models $S_i$ rapidly deteriorates between round 1 and round 2 of the adversarial task. This is a clear indication that models need to train on adversarially-collected data to be robust to adversarial behavior.
Standard models ($S_i$), trained on the standard data, tend to perform similarly to the adversarial models ($A_i$) as measured on the standard test sets, with the exception of training round 3, in which $A_3$ fails to improve on this task, likely due to being too optimized for adversarial tasks. The standard models $S_i$, on the other hand, are improving with subsequent rounds as they have more training data of the same distribution as the evaluation set. Similarly, our baseline model performs best on its own test set, but other models are not far behind.
Finally, we remark that all scores of 0 in Table TABREF22 are by design, as for round $i$ of the adversarial task, both $A_0$ and $A_{i-1}$ classified each example as safe during the `break it' data collection phase.
<<</Fix it Phase>>>
<<</Experimental Results>>>
<<</Single-Turn Task>>>
<<<Multi-Turn Task>>>
In most real-world applications, we find that adversarial behavior occurs in context – whether it is in the context of a one-on-one conversation, a comment thread, or even an image. In this work we focus on offensive utterances within the context of two-person dialogues. For dialogue safety we posit it is important to move beyond classifying single utterances, as it may be the case that an utterance is entirely innocuous on its own but extremely offensive in the context of the previous dialogue history. For instance, “Yes, you should definitely do it!" is a rather inoffensive message by itself, but most would agree that it is a hurtful response to the question “Should I hurt myself?"
<<<Task Implementation>>>
To this end, we collect data by asking crowdworkers to try to “beat" our best single-turn classifier (using the model that performed best on rounds 1-3 of the adversarial task, i.e., $A_3$), in addition to our baseline classifier $A_0$. The workers are shown truncated pieces of a conversation from the ConvAI2 chit-chat task, and asked to continue the conversation with offensive responses that our classifier marks as safe. As before, workers have two attempts per conversation to try to get past the classifier and are shown five conversations per round. They are given a score (out of five) at the end of each round indicating the number of times they successfully fooled the classifier.
We collected 3000 offensive examples in this manner. As in the single-turn set up, we combine this data with safe examples with a ratio of 9:1 safe to offensive for classifier training. The safe examples are dialogue examples from ConvAI2 for which the responses were reviewed by two independent crowdworkers and labeled as safe, as in the single-turn task set-up. We refer to this overall task as the multi-turn adversarial task. Dataset statistics are given in Table TABREF30.
<<</Task Implementation>>>
<<</Multi-Turn Task>>>
<<<Conclusion>>>
We have presented an approach to build more robust offensive language detection systems in the context of a dialogue. We proposed a build it, break it, fix it, and then repeat strategy, whereby humans attempt to break the models we built, and we use the broken examples to fix the models. We show this results in far more nuanced language than in existing datasets. The adversarial data includes less profanity, which existing classifiers can pick up on, and is instead offensive due to figurative language, negation, and by requiring more world knowledge, which all make current classifiers fail. Similarly, offensive language in the context of a dialogue is also more nuanced than stand-alone offensive utterances. We show that classifiers that learn from these more complex examples are indeed more robust to attack, and that using the dialogue context gives improved performance if the model architecture takes it into account.
In this work we considered a binary problem (offensive or safe). Future work could consider classes of offensive language separately BIBREF13, or explore other dialogue tasks, e.g. from social media or forums. Another interesting direction is to explore how our build it, break it, fix it strategy would similarly apply to make neural generative models safe BIBREF31.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Related Work, Conclusion"
],
"type": "disordered_section"
}
|
1908.06083
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack
<<<Abstract>>>
The detection of offensive language in the context of a dialogue has become an increasingly important application of natural language processing. The detection of trolls in public forums (Galan-Garcia et al., 2016), and the deployment of chatbots in the public domain (Wolf et al., 2017) are two examples that show the necessity of guarding against adversarially offensive behavior on the part of humans. In this work, we develop a training scheme for a model to become robust to such human attacks by an iterative build it, break it, fix it strategy with humans and models in the loop. In detailed experiments we show this approach is considerably more robust than previous systems. Further, we show that offensive language used within a conversation critically depends on the dialogue context, and cannot be viewed as a single sentence offensive detection task as in most previous work. Our newly collected tasks and methods will be made open source and publicly available.
<<</Abstract>>>
<<<Introduction>>>
The detection of offensive language has become an important topic as the online community has grown, and so too has the number of bad actors BIBREF2. Such behavior includes, but is not limited to, trolling in public discussion forums BIBREF3 and via social media BIBREF4, BIBREF5, employing hate speech that expresses prejudice against a particular group, or offensive language specifically targeting an individual. Such actions can be motivated to cause harm from which the bad actor derives enjoyment, despite negative consequences to others BIBREF6. As such, some bad actors go to great lengths both to avoid detection and to achieve their goals BIBREF7. In that context, any attempt to automatically detect this behavior can be expected to be adversarially attacked by looking for weaknesses in the detection system, which currently can easily be exploited as shown in BIBREF8, BIBREF9. A further example, relevant to the natural language processing community, is the exploitation of weaknesses in machine learning models that generate text, to force them to emit offensive language. Adversarial attacks on the Tay chatbot led to the developers shutting down the system BIBREF1.
In this work, we study the detection of offensive language in dialogue with models that are robust to adversarial attack. We develop an automatic approach to the “Build it Break it Fix it” strategy originally adopted for writing secure programs BIBREF10, and to the “Build it Break it” approach subsequently adapted for NLP BIBREF11. In the latter work, two teams of researchers, “builders” and “breakers”, were used to first create sentiment and semantic role-labeling systems and then construct examples that find their faults. In this work we instead fully automate such an approach using crowdworkers as the humans-in-the-loop, and also apply a fixing stage where models are retrained to improve them. Finally, we repeat the whole build, break, and fix sequence over a number of iterations.
We show that such an approach provides more and more robust systems over the fixing iterations. Analysis of the type of data collected in the iterations of the break it phase shows clear distribution changes, moving away from simple use of profanity and other obvious offensive words to utterances that require understanding of world knowledge, figurative language, and use of negation to detect if they are offensive or not. Further, data collected in the context of a dialogue rather than a sentence without context provides more sophisticated attacks. We show that model architectures that use the dialogue context efficiently perform much better than systems that do not, where the latter has been the main focus of existing research BIBREF12, BIBREF5, BIBREF13.
Code for our entire build it, break it, fix it algorithm will be made open source, complete with model training code and crowdsourcing interface for humans. Our data and trained models will also be made available for the community.
<<</Introduction>>>
<<<Related Work>>>
The task of detecting offensive language has been studied across a variety of content classes. Perhaps the most commonly studied class is hate speech, but work has also covered bullying, aggression, and toxic comments BIBREF13.
To this end, various datasets have been created to benchmark progress in the field. In hate speech detection, recently BIBREF5 compiled and released a dataset of over 24,000 tweets labeled as containing hate speech, offensive language, or neither. The TRAC shared task on Aggression Identification, a dataset of over 15,000 Facebook comments labeled with varying levels of aggression, was released as part of a competition BIBREF14. In order to benchmark toxic comment detection, the Wikipedia Toxic Comments dataset (which we study in this work) was collected and extracted from Wikipedia Talk pages and featured in a Kaggle competition BIBREF12, BIBREF15. Each of these benchmarks examines only single-turn utterances, outside of the context in which the language appeared. In this work we recommend that future systems should move beyond classification of singular utterances and use contextual information to help identify offensive language.
Many approaches have been taken to solve these tasks – from linear regression and SVMs to deep learning BIBREF16. The best performing systems in each of the competitions mentioned above (for aggression and toxic comment classification) used deep learning approaches such as LSTMs and CNNs BIBREF14, BIBREF15. In this work we consider a large-pretrained transformer model which has been shown to perform well on many downstream NLP tasks BIBREF17.
The broad class of adversarial training is currently a hot topic in machine learning BIBREF18. Use cases include training image generators BIBREF19 as well as image classifiers to be robust to adversarial examples BIBREF20. These methods find the breaking examples algorithmically, rather than by using human breakers as we do. Applying the same approaches to NLP tends to be more challenging because, unlike for images, even small changes to a sentence can cause a large change in the meaning of that sentence, which a human can detect but a lower-quality model cannot. Nevertheless, algorithmic approaches have been attempted, for example in text classification BIBREF21, machine translation BIBREF22, dialogue generation tasks BIBREF23 and reading comprehension BIBREF24. The latter was particularly effective at proposing a more difficult version of the popular SQuAD dataset.
As mentioned in the introduction, our approach takes inspiration from “Build it Break it” approaches which have been successfully tried in other domains BIBREF10, BIBREF11. Those approaches advocate finding faults in systems by having humans look for insecurities (in software) or prediction failures (in models), but do not advocate an automated approach as we do here. Our work is also closely connected to the “Mechanical Turker Descent” algorithm detailed in BIBREF25 where language to action pairs were collected from crowdworkers by incentivizing them with a game-with-a-purpose technique: a crowdworker receives a bonus if their contribution results in better models than another crowdworker. We did not gamify our approach in this way, but still our approach has commonalities in the round-based improvement of models through crowdworker interaction.
<<</Related Work>>>
<<<Baselines: Wikipedia Toxic Comments>>>
In this section we describe the publicly available data that we have used to bootstrap our build it break it fix it approach. We also compare our model choices with existing work and clarify the metrics chosen to report our results.
<<<Wikipedia Toxic Comments>>>
The Wikipedia Toxic Comments dataset (WTC) has been collected in a common effort from the Wikimedia Foundation and Jigsaw BIBREF12 to identify personal attacks online. The data has been extracted from the Wikipedia Talk pages, discussion pages where editors can discuss improvements to articles or other Wikipedia pages. We considered the version of the dataset that corresponds to the Kaggle competition: “Toxic Comment Classification Challenge" BIBREF15 which features 7 classes of toxicity: toxic, severe toxic, obscene, threat, insult, identity hate and non-toxic. In the same way as in BIBREF26, every label except non-toxic is grouped into a class offensive while the non-toxic class is kept as the safe class. In order to compare our results to BIBREF26, we similarly split this dataset to dedicate 10% as a test set. 80% are dedicated to train set while the remaining 10% is used for validation. Statistics on the dataset are shown in Table TABREF4.
<<</Wikipedia Toxic Comments>>>
<<<Models>>>
We establish baselines using two models. The first one is a binary classifier built on top of a large pre-trained transformer model. We use the same architecture as in BERT BIBREF17. We add a linear layer to the output of the first token ([CLS]) to produce a final binary classification. We initialize the model using the weights provided by BIBREF17 corresponding to “BERT-base". The transformer is composed of 12 layers with hidden size of 768 and 12 attention heads. We fine-tune the whole network on the classification task. We also compare it to the fastText classifier BIBREF27, for which a given sentence is encoded as the average of individual word vectors that are pre-trained on a large corpus derived from Wikipedia. A linear layer is then applied on top to yield a binary classification.
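To make this architecture concrete, the following is a minimal sketch of a binary classifier with a linear layer over the [CLS] output; it assumes PyTorch and the Hugging Face transformers library, which stand in for whatever implementation was actually used, and the label convention is illustrative.
# Minimal sketch (not the authors' code): a BERT-base encoder with a linear
# classification layer on the [CLS] token, fine-tuned end-to-end.
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class SafetyClassifier(nn.Module):
    def __init__(self, pretrained="bert-base-uncased", num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained)  # 12 layers, hidden size 768, 12 heads
        self.head = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = out.last_hidden_state[:, 0]  # representation of the first ([CLS]) token
        return self.head(cls_repr)              # logits over {safe, offensive} (label order assumed)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = SafetyClassifier()
batch = tokenizer(["have a nice day", "you are an idiot"], padding=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])  # shape: (2, 2)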
<<</Models>>>
<<<Experiments>>>
We compare the two aforementioned models with BIBREF26 who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors BIBREF28. Results are listed in Table TABREF5 and we compare them using the weighted-F1, i.e. the sum of F1 score of each class weighted by their frequency in the dataset. We also report the F1 of the offensive-class which is the metric we favor within this work, although we report both. (Note that throughout the paper, the notation F1 is always referring to offensive-class F1.) Indeed, in the case of an imbalanced dataset such as Wikipedia Toxic Comments where most samples are safe, the weighted-F1 is closer to the F1 score of the safe class while we focus on detecting offensive content. Our BERT-based model outperforms the method from BIBREF26; throughout the rest of the paper, we use the BERT-based architecture in our experiments. In particular, we used this baseline trained on WTC to bootstrap our approach, to be described subsequently.
<<</Experiments>>>
<<</Baselines: Wikipedia Toxic Comments>>>
<<<Build it Break it Fix it Method>>>
In order to train models that are robust to adversarial behavior, we posit that it is crucial to collect and train on data that was gathered in an adversarial manner. We propose the following automated build it, break it, fix it algorithm:
Build it: Build a model capable of detecting offensive messages. This is our best-performing BERT-based model trained on the Wikipedia Toxic Comments dataset described in the previous section. We refer to this model throughout as $A_0$.
Break it: Ask crowdworkers to try to “beat the system" by submitting messages that our system ($A_0$) marks as safe but that the worker considers to be offensive.
Fix it: Train a new model on these collected examples in order to be more robust to these adversarial attacks.
Repeat: Repeat, deploying the newly trained model in the break it phase, then fix it again.
See Figure FIGREF6 for a visualization of this process.
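Schematically, the loop can be sketched as below; train_model and collect_adversarial are hypothetical callables standing in for the BERT training procedure and the crowdworker "break it" phase, so this illustrates the control flow rather than a runnable pipeline.
# Schematic sketch of the automated build it, break it, fix it loop.
# `train_model(datasets) -> model` and `collect_adversarial(models, n) -> examples`
# are hypothetical callables for model training and the crowdsourced break-it phase.
def build_break_fix(train_model, collect_adversarial, wtc_data,
                    num_rounds=3, examples_per_round=1000):
    baseline = train_model([wtc_data])          # Build it: A_0 trained on Wikipedia Toxic Comments
    models, adversarial_rounds = [baseline], []
    for i in range(1, num_rounds + 1):
        # Break it: messages that A_0 and A_{i-1} mark as safe but workers deem offensive.
        broken = collect_adversarial([models[0], models[-1]], examples_per_round)
        adversarial_rounds.append(broken)
        # Fix it: retrain on WTC plus all adversarial rounds collected so far.
        models.append(train_model([wtc_data] + adversarial_rounds))
    return models[-1]                           # the most robust model after the final fix-it round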
<<<Break it Details>>>
<<<Definition of offensive>>>
Throughout data collection, we characterize offensive messages for users as messages that would not be “ok to send in a friendly conversation with someone you just met online." We use this specific language in an attempt to capture various classes of content that would be considered unacceptable in a friendly conversation, without imposing our own definitions of what that means. The phrase “with someone you just met online" was meant to mimic the setting of a public forum.
<<</Definition of offensive>>>
<<<Crowdworker Task>>>
We ask crowdworkers to try to “beat the system" by submitting messages that our system marks as safe but that the worker considers to be offensive. For a given round, workers earn a “game” point each time they are able to “beat the system," or in other words, trick the model by submitting offensive messages that the model marks as safe. Workers earn up to 5 points each round, and have two tries for each point: we allow multiple attempts per point so that workers can get feedback from the models and better understand their weaknesses. The points serve to indicate success to the crowdworker and motivate to achieve high scores, but have no other meaning (e.g. no monetary value as in BIBREF25). More details regarding the user interface and instructions can be found in Appendix SECREF9.
<<</Crowdworker Task>>>
<<<Models to Break>>>
During round 1, workers try to break the baseline model $A_0$, trained on Wikipedia Toxic Comments. For rounds $i$, $i > 1$, workers must break both the baseline model and the model from the previous “fix it" round, which we refer to as $A_{i-1}$. In that case, the worker must submit messages that both $A_0$ and $A_{i-1}$ mark as safe but which the worker considers to be offensive.
<<</Models to Break>>>
<<</Break it Details>>>
<<<Fix it Details>>>
During the “fix it" round, we update the models with the newly collected adversarial data from the “break it" round.
The training data consists of all previous rounds of data, so that model $A_i$ is trained on all rounds $n$ for $n \le i$, as well as the Wikipedia Toxic Comments data. We split each round of data into train, validation, and test partitions. The validation set is used for hyperparameter selection. The test sets are used to measure how robust we are to new adversarial attacks. With increasing round $i$, $A_i$ should become more robust to increasingly complex human adversarial attacks.
<<</Fix it Details>>>
<<</Build it Break it Fix it Method>>>
<<<Single-Turn Task>>>
We first consider a single-turn set-up, i.e. detection of offensive language in one utterance, with no dialogue context or conversational history.
<<<Data Collection>>>
<<<Adversarial Collection>>>
We collected three rounds of data with the build it, break it, fix it algorithm described in the previous section. Each round of data consisted of 1000 examples, leading to 3000 single-turn adversarial examples in total. For the remainder of the paper, we refer to this method of data collection as the adversarial method.
<<</Adversarial Collection>>>
<<<Standard Collection>>>
In addition to the adversarial method, we also collected data in a non-adversarial manner in order to directly compare the two set-ups. In this method – which we refer to as the standard method, we simply ask crowdworkers to submit messages that they consider to be offensive. There is no model to break. Instructions are otherwise the same.
In this set-up, there is no real notion of “rounds", but for the sake of comparison we refer to each subsequent 1000 examples collected in this manner as a “round". We collect 3000 examples – or three rounds of data. We refer to a model trained on rounds $n \le i$ of the standard data as $S_i$.
<<</Standard Collection>>>
<<<Task Formulation Details>>>
Since all of the collected examples are labeled as offensive, to make this task a binary classification problem, we will also add safe examples to it.
The “safe data" is comprised of utterances from the ConvAI2 chit-chat task BIBREF29, BIBREF30 which consists of pairs of humans getting to know each other by discussing their interests. Each utterance we used was reviewed by two independent crowdworkers and labeled as safe, with the same characterization of safe as described before.
For each partition (train, validation, test), the final task has a ratio of 9:1 safe to offensive examples, mimicking the division of the Wikipedia Toxic Comments dataset used for training our baseline models. Dataset statistics for the final task can be found in Table TABREF21. We refer to these tasks – with both safe and offensive examples – as the adversarial and standard tasks.
<<</Task Formulation Details>>>
<<<Model Training Details>>>
Using the BERT-based model architecture described in Section SECREF3, we trained models on each round of the standard and adversarial tasks, multi-tasking with the Wikipedia Toxic Comments task. We weight the multi-tasking with a mixing parameter which is also tuned on the validation set. Finally, after training weights with the cross entropy loss, we adjust the final bias also using the validation set. We optimize for the sensitive class (i.e. offensive-class) F1 metric on the standard and adversarial validation sets respectively.
For each task (standard and adversarial), on round $i$, we train on data from all rounds $n$ for $n \le i$ and optimize for performance on the validation sets $n \le i$.
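One plausible reading of this set-up is sketched below, where the mixing parameter acts as a loss weight between the current task and the Wikipedia Toxic Comments task, and a scalar bias on the offensive-class logit is tuned after training; both details are assumptions rather than a description of the actual code.
# Sketch of multi-task training with a validation-tuned mixing weight.
import torch.nn.functional as F

def multitask_loss(model, task_batch, wtc_batch, mix=0.5):
    task_logits = model(task_batch["input_ids"], task_batch["attention_mask"])
    wtc_logits = model(wtc_batch["input_ids"], wtc_batch["attention_mask"])
    loss_task = F.cross_entropy(task_logits, task_batch["labels"])
    loss_wtc = F.cross_entropy(wtc_logits, wtc_batch["labels"])
    return loss_task + mix * loss_wtc   # `mix` tuned on the validation set

def adjust_bias(logits, offensive_bias):
    # After training, a scalar added to the offensive-class logit can be tuned
    # on the validation set to maximize offensive-class F1 (interpretation assumed).
    logits = logits.clone()
    logits[:, 1] += offensive_bias
    return logits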
<<</Model Training Details>>>
<<</Data Collection>>>
<<<Experimental Results>>>
We conduct experiments comparing the adversarial and standard methods. We break down the results into “break it" results comparing the data collected and “fix it" results comparing the models obtained.
<<<Break it Phase>>>
Examples obtained from both the adversarial and standard collection methods were found to be clearly offensive, but we note several differences in the distribution of examples from each task, shown in Table TABREF21. First, examples from the standard task tend to contain more profanity. Using a list of common English obscenities and otherwise bad words, in Table TABREF21 we calculate the percentage of examples in each task containing such obscenities, and see that the standard examples contain at least seven times as many as each round of the adversarial task. Additionally, in previous works, authors have observed that classifiers struggle with negations BIBREF8. This is borne out by our data: examples from the single-turn adversarial task more often contain the token “not" than examples from the standard task, indicating that users are easily able to fool the classifier with negations.
We also anecdotally see figurative language such as “snakes hiding in the grass” in the adversarial data, which contains no individually offensive words; the offensive nature is captured only by reading the entire sentence. Other examples require sophisticated world knowledge, such as the fact that many cultures consider eating cats to be offensive. To quantify these differences, we performed a blind human annotation of a sample of the data, 100 examples of standard and 100 examples of adversarial round 1. Results are shown in Table TABREF16. Adversarial data was indeed found to contain less profanity, fewer non-profane but offending words (such as “idiot”), more figurative language, and to require more world knowledge.
We note that, as anticipated, the task becomes more challenging for the crowdworkers with each round, indicated by the decreasing average scores in Table TABREF27. In round 1, workers are able to get past $A_0$ most of the time – earning an average score of $4.56$ out of 5 points per round – showcasing how susceptible this baseline is to adversarial attack despite its relatively strong performance on the Wikipedia Toxic Comments task. By round 3, however, workers struggle to trick the system, earning an average score of only $1.6$ out of 5. A finer-grained assessment of the worker scores can be found in Table TABREF38 in the appendix.
<<</Break it Phase>>>
<<<Fix it Phase>>>
Results comparing the performance of models trained on the adversarial ($A_i$) and standard ($S_i$) tasks are summarized in Table TABREF22, with further results in Table TABREF41 in Appendix SECREF40. The adversarially trained models $A_i$ prove to be more robust to adversarial attack: on each round of adversarial testing they outperform standard models $S_i$.
Further, note that the adversarial task becomes harder with each subsequent round. In particular, the performance of the standard models $S_i$ rapidly deteriorates between round 1 and round 2 of the adversarial task. This is a clear indication that models need to train on adversarially-collected data to be robust to adversarial behavior.
Standard models ($S_i$), trained on the standard data, tend to perform similarly to the adversarial models ($A_i$) as measured on the standard test sets, with the exception of training round 3, in which $A_3$ fails to improve on this task, likely due to being too optimized for adversarial tasks. The standard models $S_i$, on the other hand, are improving with subsequent rounds as they have more training data of the same distribution as the evaluation set. Similarly, our baseline model performs best on its own test set, but other models are not far behind.
Finally, we remark that all scores of 0 in Table TABREF22 are by design, as for round $i$ of the adversarial task, both $A_0$ and $A_{i-1}$ classified each example as safe during the `break it' data collection phase.
<<</Fix it Phase>>>
<<</Experimental Results>>>
<<</Single-Turn Task>>>
<<<Multi-Turn Task>>>
In most real-world applications, we find that adversarial behavior occurs in context – whether it is in the context of a one-on-one conversation, a comment thread, or even an image. In this work we focus on offensive utterances within the context of two-person dialogues. For dialogue safety we posit it is important to move beyond classifying single utterances, as it may be the case that an utterance is entirely innocuous on its own but extremely offensive in the context of the previous dialogue history. For instance, “Yes, you should definitely do it!" is a rather inoffensive message by itself, but most would agree that it is a hurtful response to the question “Should I hurt myself?"
<<<Task Implementation>>>
To this end, we collect data by asking crowdworkers to try to “beat" our best single-turn classifier (using the model that performed best on rounds 1-3 of the adversarial task, i.e., $A_3$), in addition to our baseline classifier $A_0$. The workers are shown truncated pieces of a conversation from the ConvAI2 chit-chat task, and asked to continue the conversation with offensive responses that our classifier marks as safe. As before, workers have two attempts per conversation to try to get past the classifier and are shown five conversations per round. They are given a score (out of five) at the end of each round indicating the number of times they successfully fooled the classifier.
We collected 3000 offensive examples in this manner. As in the single-turn set-up, we combine this data with safe examples at a ratio of 9:1 safe to offensive for classifier training. The safe examples are dialogue examples from ConvAI2 for which the responses were reviewed by two independent crowdworkers and labeled as safe, as in the single-turn task set-up. We refer to this overall task as the multi-turn adversarial task. Dataset statistics are given in Table TABREF30.
<<</Task Implementation>>>
<<</Multi-Turn Task>>>
<<<Conclusion>>>
We have presented an approach to build more robust offensive language detection systems in the context of a dialogue. We proposed a build it, break it, fix it, and then repeat strategy, whereby humans attempt to break the models we built, and we use the broken examples to fix the models. We show this results in far more nuanced language than in existing datasets. The adversarial data includes less profanity, which existing classifiers can pick up on, and is instead offensive due to figurative language, negation, and by requiring more world knowledge, which all make current classifiers fail. Similarly, offensive language in the context of a dialogue is also more nuanced than stand-alone offensive utterances. We show that classifiers that learn from these more complex examples are indeed more robust to attack, and that using the dialogue context gives improved performance if the model architecture takes it into account.
In this work we considered a binary problem (offensive or safe). Future work could consider classes of offensive language separately BIBREF13, or explore other dialogue tasks, e.g. from social media or forums. Another interesting direction is to explore how our build it, break it, fix it strategy would similarly apply to make neural generative models safe BIBREF31.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Multi-Turn Task"
],
"type": "disordered_section"
}
|
1908.06083
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack
<<<Abstract>>>
The detection of offensive language in the context of a dialogue has become an increasingly important application of natural language processing. The detection of trolls in public forums (Galan-Garcia et al., 2016), and the deployment of chatbots in the public domain (Wolf et al., 2017) are two examples that show the necessity of guarding against adversarially offensive behavior on the part of humans. In this work, we develop a training scheme for a model to become robust to such human attacks by an iterative build it, break it, fix it strategy with humans and models in the loop. In detailed experiments we show this approach is considerably more robust than previous systems. Further, we show that offensive language used within a conversation critically depends on the dialogue context, and cannot be viewed as a single sentence offensive detection task as in most previous work. Our newly collected tasks and methods will be made open source and publicly available.
<<</Abstract>>>
<<<Introduction>>>
The detection of offensive language has become an important topic as the online community has grown, as has the number of bad actors BIBREF2. Such behavior includes, but is not limited to, trolling in public discussion forums BIBREF3 and via social media BIBREF4, BIBREF5, employing hate speech that expresses prejudice against a particular group, or offensive language specifically targeting an individual. Such actions can be motivated to cause harm from which the bad actor derives enjoyment, despite negative consequences to others BIBREF6. As such, some bad actors go to great lengths to both avoid detection and to achieve their goals BIBREF7. In that context, any attempt to automatically detect this behavior can be expected to be adversarially attacked by looking for weaknesses in the detection system, which currently can easily be exploited as shown in BIBREF8, BIBREF9. A further example, relevant to the natural language processing community, is the exploitation of weaknesses in machine learning models that generate text, to force them to emit offensive language. Adversarial attacks on the Tay chatbot led to the developers shutting down the system BIBREF1.
In this work, we study the detection of offensive language in dialogue with models that are robust to adversarial attack. We develop an automatic approach to the “Build it Break it Fix it” strategy originally adopted for writing secure programs BIBREF10, and the “Build it Break it” approach consequently adapting it for NLP BIBREF11. In the latter work, two teams of researchers, “builders” and “breakers” were used to first create sentiment and semantic role-labeling systems and then construct examples that find their faults. In this work we instead fully automate such an approach using crowdworkers as the humans-in-the-loop, and also apply a fixing stage where models are retrained to improve them. Finally, we repeat the whole build, break, and fix sequence over a number of iterations.
We show that such an approach provides more and more robust systems over the fixing iterations. Analysis of the type of data collected in the iterations of the break it phase shows clear distribution changes, moving away from simple use of profanity and other obvious offensive words to utterances that require understanding of world knowledge, figurative language, and use of negation to detect if they are offensive or not. Further, data collected in the context of a dialogue rather than a sentence without context provides more sophisticated attacks. We show that model architectures that use the dialogue context efficiently perform much better than systems that do not, where the latter has been the main focus of existing research BIBREF12, BIBREF5, BIBREF13.
Code for our entire build it, break it, fix it algorithm will be made open source, complete with model training code and crowdsourcing interface for humans. Our data and trained models will also be made available for the community.
<<</Introduction>>>
<<<Related Work>>>
The task of detecting offensive language has been studied across a variety of content classes. Perhaps the most commonly studied class is hate speech, but work has also covered bullying, aggression, and toxic comments BIBREF13.
To this end, various datasets have been created to benchmark progress in the field. In hate speech detection, BIBREF5 recently compiled and released a dataset of over 24,000 tweets labeled as containing hate speech, offensive language, or neither. The TRAC shared task on Aggression Identification, a dataset of over 15,000 Facebook comments labeled with varying levels of aggression, was released as part of a competition BIBREF14. In order to benchmark toxic comment detection, the Wikipedia Toxic Comments dataset (which we study in this work) was collected and extracted from Wikipedia Talk pages and featured in a Kaggle competition BIBREF12, BIBREF15. Each of these benchmarks examines only single-turn utterances, outside of the context in which the language appeared. In this work we recommend that future systems should move beyond classification of singular utterances and use contextual information to help identify offensive language.
Many approaches have been taken to solve these tasks – from linear regression and SVMs to deep learning BIBREF16. The best performing systems in each of the competitions mentioned above (for aggression and toxic comment classification) used deep learning approaches such as LSTMs and CNNs BIBREF14, BIBREF15. In this work we consider a large-pretrained transformer model which has been shown to perform well on many downstream NLP tasks BIBREF17.
The broad class of adversarial training is currently a hot topic in machine learning BIBREF18. Use cases include training image generators BIBREF19 as well as image classifiers to be robust to adversarial examples BIBREF20. These methods find the breaking examples algorithmically, rather than by using human breakers as we do. Applying the same approaches to NLP tends to be more challenging because, unlike for images, even small changes to a sentence can cause a large change in the meaning of that sentence, which a human can detect but a lower-quality model cannot. Nevertheless, algorithmic approaches have been attempted, for example in text classification BIBREF21, machine translation BIBREF22, dialogue generation tasks BIBREF23 and reading comprehension BIBREF24. The latter was particularly effective at proposing a more difficult version of the popular SQuAD dataset.
As mentioned in the introduction, our approach takes inspiration from “Build it Break it” approaches which have been successfully tried in other domains BIBREF10, BIBREF11. Those approaches advocate finding faults in systems by having humans look for insecurities (in software) or prediction failures (in models), but do not advocate an automated approach as we do here. Our work is also closely connected to the “Mechanical Turker Descent” algorithm detailed in BIBREF25 where language to action pairs were collected from crowdworkers by incentivizing them with a game-with-a-purpose technique: a crowdworker receives a bonus if their contribution results in better models than another crowdworker. We did not gamify our approach in this way, but still our approach has commonalities in the round-based improvement of models through crowdworker interaction.
<<</Related Work>>>
<<<Baselines: Wikipedia Toxic Comments>>>
In this section we describe the publicly available data that we have used to bootstrap our build it break it fix it approach. We also compare our model choices with existing work and clarify the metrics chosen to report our results.
<<<Wikipedia Toxic Comments>>>
The Wikipedia Toxic Comments dataset (WTC) has been collected in a common effort from the Wikimedia Foundation and Jigsaw BIBREF12 to identify personal attacks online. The data has been extracted from the Wikipedia Talk pages, discussion pages where editors can discuss improvements to articles or other Wikipedia pages. We considered the version of the dataset that corresponds to the Kaggle competition: “Toxic Comment Classification Challenge" BIBREF15 which features 7 classes of toxicity: toxic, severe toxic, obscene, threat, insult, identity hate and non-toxic. In the same way as in BIBREF26, every label except non-toxic is grouped into a class offensive while the non-toxic class is kept as the safe class. In order to compare our results to BIBREF26, we similarly split this dataset to dedicate 10% as a test set. 80% are dedicated to train set while the remaining 10% is used for validation. Statistics on the dataset are shown in Table TABREF4.
<<</Wikipedia Toxic Comments>>>
<<<Models>>>
We establish baselines using two models. The first one is a binary classifier built on top of a large pre-trained transformer model. We use the same architecture as in BERT BIBREF17. We add a linear layer to the output of the first token ([CLS]) to produce a final binary classification. We initialize the model using the weights provided by BIBREF17 corresponding to “BERT-base". The transformer is composed of 12 layers with hidden size of 768 and 12 attention heads. We fine-tune the whole network on the classification task. We also compare it to the fastText classifier BIBREF27, for which a given sentence is encoded as the average of individual word vectors that are pre-trained on a large corpus derived from Wikipedia. A linear layer is then applied on top to yield a binary classification.
<<</Models>>>
<<<Experiments>>>
We compare the two aforementioned models with BIBREF26 who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors BIBREF28. Results are listed in Table TABREF5 and we compare them using the weighted-F1, i.e. the sum of F1 score of each class weighted by their frequency in the dataset. We also report the F1 of the offensive-class which is the metric we favor within this work, although we report both. (Note that throughout the paper, the notation F1 is always referring to offensive-class F1.) Indeed, in the case of an imbalanced dataset such as Wikipedia Toxic Comments where most samples are safe, the weighted-F1 is closer to the F1 score of the safe class while we focus on detecting offensive content. Our BERT-based model outperforms the method from BIBREF26; throughout the rest of the paper, we use the BERT-based architecture in our experiments. In particular, we used this baseline trained on WTC to bootstrap our approach, to be described subsequently.
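For clarity, the two metrics can be computed as in the toy sketch below; scikit-learn and the 0/1 label convention are assumptions made purely for illustration.
# Illustration of weighted-F1 versus offensive-class F1 on toy predictions,
# with label convention 0 = safe, 1 = offensive (an assumption for this sketch).
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # imbalanced: mostly safe
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

weighted_f1 = f1_score(y_true, y_pred, average="weighted")  # dominated by the safe class
offensive_f1 = f1_score(y_true, y_pred, pos_label=1)        # the metric favored in this work
print(f"weighted-F1 = {weighted_f1:.3f}, offensive-class F1 = {offensive_f1:.3f}")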
<<</Experiments>>>
<<</Baselines: Wikipedia Toxic Comments>>>
<<<Build it Break it Fix it Method>>>
In order to train models that are robust to adversarial behavior, we posit that it is crucial to collect and train on data that was gathered in an adversarial manner. We propose the following automated build it, break it, fix it algorithm:
Build it: Build a model capable of detecting offensive messages. This is our best-performing BERT-based model trained on the Wikipedia Toxic Comments dataset described in the previous section. We refer to this model throughout as $A_0$.
Break it: Ask crowdworkers to try to “beat the system" by submitting messages that our system ($A_0$) marks as safe but that the worker considers to be offensive.
Fix it: Train a new model on these collected examples in order to be more robust to these adversarial attacks.
Repeat: Repeat, deploying the newly trained model in the break it phase, then fix it again.
See Figure FIGREF6 for a visualization of this process.
<<<Break it Details>>>
<<<Definition of offensive>>>
Throughout data collection, we characterize offensive messages for users as messages that would not be “ok to send in a friendly conversation with someone you just met online." We use this specific language in an attempt to capture various classes of content that would be considered unacceptable in a friendly conversation, without imposing our own definitions of what that means. The phrase “with someone you just met online" was meant to mimic the setting of a public forum.
<<</Definition of offensive>>>
<<<Crowdworker Task>>>
We ask crowdworkers to try to “beat the system" by submitting messages that our system marks as safe but that the worker considers to be offensive. For a given round, workers earn a “game” point each time they are able to “beat the system," or in other words, trick the model by submitting offensive messages that the model marks as safe. Workers earn up to 5 points each round, and have two tries for each point: we allow multiple attempts per point so that workers can get feedback from the models and better understand their weaknesses. The points serve to indicate success to the crowdworker and motivate to achieve high scores, but have no other meaning (e.g. no monetary value as in BIBREF25). More details regarding the user interface and instructions can be found in Appendix SECREF9.
<<</Crowdworker Task>>>
<<<Models to Break>>>
During round 1, workers try to break the baseline model $A_0$, trained on Wikipedia Toxic Comments. For rounds $i$, $i > 1$, workers must break both the baseline model and the model from the previous “fix it" round, which we refer to as $A_{i-1}$. In that case, the worker must submit messages that both $A_0$ and $A_{i-1}$ mark as safe but which the worker considers to be offensive.
<<</Models to Break>>>
<<</Break it Details>>>
<<<Fix it Details>>>
During the “fix it" round, we update the models with the newly collected adversarial data from the “break it" round.
The training data consists of all previous rounds of data, so that model $A_i$ is trained on all rounds $n$ for $n \le i$, as well as the Wikipedia Toxic Comments data. We split each round of data into train, validation, and test partitions. The validation set is used for hyperparameter selection. The test sets are used to measure how robust we are to new adversarial attacks. With increasing round $i$, $A_i$ should become more robust to increasingly complex human adversarial attacks.
<<</Fix it Details>>>
<<</Build it Break it Fix it Method>>>
<<<Single-Turn Task>>>
We first consider a single-turn set-up, i.e. detection of offensive language in one utterance, with no dialogue context or conversational history.
<<<Data Collection>>>
<<<Adversarial Collection>>>
We collected three rounds of data with the build it, break it, fix it algorithm described in the previous section. Each round of data consisted of 1000 examples, leading to 3000 single-turn adversarial examples in total. For the remainder of the paper, we refer to this method of data collection as the adversarial method.
<<</Adversarial Collection>>>
<<<Standard Collection>>>
In addition to the adversarial method, we also collected data in a non-adversarial manner in order to directly compare the two set-ups. In this method – which we refer to as the standard method, we simply ask crowdworkers to submit messages that they consider to be offensive. There is no model to break. Instructions are otherwise the same.
In this set-up, there is no real notion of “rounds", but for the sake of comparison we refer to each subsequent 1000 examples collected in this manner as a “round". We collect 3000 examples – or three rounds of data. We refer to a model trained on rounds $n \le i$ of the standard data as $S_i$.
<<</Standard Collection>>>
<<<Task Formulation Details>>>
Since all of the collected examples are labeled as offensive, to make this task a binary classification problem, we will also add safe examples to it.
The “safe data" is comprised of utterances from the ConvAI2 chit-chat task BIBREF29, BIBREF30 which consists of pairs of humans getting to know each other by discussing their interests. Each utterance we used was reviewed by two independent crowdworkers and labeled as safe, with the same characterization of safe as described before.
For each partition (train, validation, test), the final task has a ratio of 9:1 safe to offensive examples, mimicking the division of the Wikipedia Toxic Comments dataset used for training our baseline models. Dataset statistics for the final task can be found in Table TABREF21. We refer to these tasks – with both safe and offensive examples – as the adversarial and standard tasks.
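The construction of such a split can be sketched as follows; the function name and the uniform sampling of safe utterances are illustrative assumptions rather than the exact procedure used.
# Sketch: combine collected offensive examples with safe ConvAI2 utterances
# at a 9:1 safe-to-offensive ratio, mirroring the Wikipedia Toxic Comments split.
import random

def build_binary_task(offensive_utts, safe_pool, safe_ratio=9, seed=0):
    rng = random.Random(seed)
    safe_utts = rng.sample(safe_pool, safe_ratio * len(offensive_utts))
    data = ([(u, "__offensive__") for u in offensive_utts]
            + [(u, "__safe__") for u in safe_utts])
    rng.shuffle(data)
    return data

# Example with toy data:
task = build_binary_task(["offensive example"], [f"safe utterance {i}" for i in range(20)])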
<<</Task Formulation Details>>>
<<<Model Training Details>>>
Using the BERT-based model architecture described in Section SECREF3, we trained models on each round of the standard and adversarial tasks, multi-tasking with the Wikipedia Toxic Comments task. We weight the multi-tasking with a mixing parameter which is also tuned on the validation set. Finally, after training weights with the cross entropy loss, we adjust the final bias also using the validation set. We optimize for the sensitive class (i.e. offensive-class) F1 metric on the standard and adversarial validation sets respectively.
For each task (standard and adversarial), on round $i$, we train on data from all rounds $n$ for $n \le i$ and optimize for performance on the validation sets $n \le i$.
<<</Model Training Details>>>
<<</Data Collection>>>
<<<Experimental Results>>>
We conduct experiments comparing the adversarial and standard methods. We break down the results into “break it" results comparing the data collected and “fix it" results comparing the models obtained.
<<<Break it Phase>>>
Examples obtained from both the adversarial and standard collection methods were found to be clearly offensive, but we note several differences in the distribution of examples from each task, shown in Table TABREF21. First, examples from the standard task tend to contain more profanity. Using a list of common English obscenities and otherwise bad words, in Table TABREF21 we calculate the percentage of examples in each task containing such obscenities, and see that the standard examples contain at least seven times as many as each round of the adversarial task. Additionally, in previous works, authors have observed that classifiers struggle with negations BIBREF8. This is borne out by our data: examples from the single-turn adversarial task more often contain the token “not" than examples from the standard task, indicating that users are easily able to fool the classifier with negations.
We also anecdotally see figurative language such as “snakes hiding in the grass” in the adversarial data, which contains no individually offensive words; the offensive nature is captured only by reading the entire sentence. Other examples require sophisticated world knowledge, such as the fact that many cultures consider eating cats to be offensive. To quantify these differences, we performed a blind human annotation of a sample of the data, 100 examples of standard and 100 examples of adversarial round 1. Results are shown in Table TABREF16. Adversarial data was indeed found to contain less profanity, fewer non-profane but offending words (such as “idiot”), more figurative language, and to require more world knowledge.
We note that, as anticipated, the task becomes more challenging for the crowdworkers with each round, indicated by the decreasing average scores in Table TABREF27. In round 1, workers are able to get past $A_0$ most of the time – earning an average score of $4.56$ out of 5 points per round – showcasing how susceptible this baseline is to adversarial attack despite its relatively strong performance on the Wikipedia Toxic Comments task. By round 3, however, workers struggle to trick the system, earning an average score of only $1.6$ out of 5. A finer-grained assessment of the worker scores can be found in Table TABREF38 in the appendix.
<<</Break it Phase>>>
<<<Fix it Phase>>>
Results comparing the performance of models trained on the adversarial ($A_i$) and standard ($S_i$) tasks are summarized in Table TABREF22, with further results in Table TABREF41 in Appendix SECREF40. The adversarially trained models $A_i$ prove to be more robust to adversarial attack: on each round of adversarial testing they outperform standard models $S_i$.
Further, note that the adversarial task becomes harder with each subsequent round. In particular, the performance of the standard models $S_i$ rapidly deteriorates between round 1 and round 2 of the adversarial task. This is a clear indication that models need to train on adversarially-collected data to be robust to adversarial behavior.
Standard models ($S_i$), trained on the standard data, tend to perform similarly to the adversarial models ($A_i$) as measured on the standard test sets, with the exception of training round 3, in which $A_3$ fails to improve on this task, likely due to being too optimized for adversarial tasks. The standard models $S_i$, on the other hand, are improving with subsequent rounds as they have more training data of the same distribution as the evaluation set. Similarly, our baseline model performs best on its own test set, but other models are not far behind.
Finally, we remark that all scores of 0 in Table TABREF22 are by design, as for round $i$ of the adversarial task, both $A_0$ and $A_{i-1}$ classified each example as safe during the `break it' data collection phase.
<<</Fix it Phase>>>
<<</Experimental Results>>>
<<</Single-Turn Task>>>
<<<Multi-Turn Task>>>
In most real-world applications, we find that adversarial behavior occurs in context – whether it is in the context of a one-on-one conversation, a comment thread, or even an image. In this work we focus on offensive utterances within the context of two-person dialogues. For dialogue safety we posit it is important to move beyond classifying single utterances, as it may be the case that an utterance is entirely innocuous on its own but extremely offensive in the context of the previous dialogue history. For instance, “Yes, you should definitely do it!" is a rather inoffensive message by itself, but most would agree that it is a hurtful response to the question “Should I hurt myself?"
<<<Task Implementation>>>
To this end, we collect data by asking crowdworkers to try to “beat" our best single-turn classifier (using the model that performed best on rounds 1-3 of the adversarial task, i.e., $A_3$), in addition to our baseline classifier $A_0$. The workers are shown truncated pieces of a conversation from the ConvAI2 chit-chat task, and asked to continue the conversation with offensive responses that our classifier marks as safe. As before, workers have two attempts per conversation to try to get past the classifier and are shown five conversations per round. They are given a score (out of five) at the end of each round indicating the number of times they successfully fooled the classifier.
We collected 3000 offensive examples in this manner. As in the single-turn set-up, we combine this data with safe examples at a ratio of 9:1 safe to offensive for classifier training. The safe examples are dialogue examples from ConvAI2 for which the responses were reviewed by two independent crowdworkers and labeled as safe, as in the single-turn task set-up. We refer to this overall task as the multi-turn adversarial task. Dataset statistics are given in Table TABREF30.
<<</Task Implementation>>>
<<</Multi-Turn Task>>>
<<<Conclusion>>>
We have presented an approach to build more robust offensive language detection systems in the context of a dialogue. We proposed a build it, break it, fix it, and then repeat strategy, whereby humans attempt to break the models we built, and we use the broken examples to fix the models. We show this results in far more nuanced language than in existing datasets. The adversarial data includes less profanity, which existing classifiers can pick up on, and is instead offensive due to figurative language, negation, and by requiring more world knowledge, which all make current classifiers fail. Similarly, offensive language in the context of a dialogue is also more nuanced than stand-alone offensive utterances. We show that classifiers that learn from these more complex examples are indeed more robust to attack, and that using the dialogue context gives improved performance if the model architecture takes it into account.
In this work we considered a binary problem (offensive or safe). Future work could consider classes of offensive language separately BIBREF13, or explore other dialogue tasks, e.g. from social media or forums. Another interesting direction is to explore how our build it, break it, fix it strategy would similarly apply to make neural generative models safe BIBREF31.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Build it Break it Fix it Method, Introduction"
],
"type": "disordered_section"
}
|
1911.05153
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Improving Robustness of Task Oriented Dialog Systems
<<<Abstract>>>
Task oriented language understanding in dialog systems is often modeled using intents (task of a query) and slots (parameters for that task). Intent detection and slot tagging are, in turn, modeled using sentence classification and word tagging techniques respectively. Similar to adversarial attack problems with computer vision models discussed in existing literature, these intent-slot tagging models are often over-sensitive to small variations in input -- predicting different and often incorrect labels when small changes are made to a query, thus reducing their accuracy and reliability. However, evaluating a model's robustness to these changes is harder for language since words are discrete and an automated change (e.g. adding `noise') to a query sometimes changes the meaning and thus labels of a query. In this paper, we first describe how to create an adversarial test set to measure the robustness of these models. Furthermore, we introduce and adapt adversarial training methods as well as data augmentation using back-translation to mitigate these issues. Our experiments show that both techniques improve the robustness of the system substantially and can be combined to yield the best results.
<<</Abstract>>>
<<<Introduction>>>
In computer vision, it is well known that otherwise competitive models can be "fooled" by adding intentional noise to the input images BIBREF0, BIBREF1. Such changes, imperceptible to the human eye, can cause the model to reverse its initial correct decision on the original input. This has also been studied for Automatic Speech Recognition (ASR) by including hidden commands BIBREF2 in the voice input. Devising such adversarial examples for machine learning algorithms, in particular for neural networks, along with defense mechanisms against them, has been of recent interest BIBREF3. The lack of smoothness of the decision boundaries BIBREF4 and reliance on weakly correlated features that do not generalize BIBREF5 seem to be the main reasons for confident but incorrect predictions for instances that are far from the training data manifold. Among the most successful techniques to increase resistance to such attacks is perturbing the training data and enforcing the output to remain the same BIBREF4, BIBREF6. This is expected to improve the smoothing of the decision boundaries close to the training data but may not help with points that are far from them.
There has been recent interest in studying this adversarial attack phenomenon for natural language processing tasks, but that is harder than vision problems for at least two reasons: 1) textual input is discrete, and 2) adding noise may completely change a sentence's meaning or even make it meaningless. Although there are various works that devise adversarial examples in the NLP domain, defense mechanisms have been rare. BIBREF7 applied perturbation to the continuous word embeddings instead of the discrete tokens. This has been shown BIBREF8 to act as a regularizer that increases the model performance on the clean dataset but the perturbed inputs are not true adversarial examples, as they do not correspond to any input text and it cannot be tested whether they are perceptible to humans or not.
Unrestricted adversarial examples BIBREF9 lift the constraint on the size of added perturbation and as such can be harder to defend against. Recently, Generative Adversarial Networks (GANs) alongside an auxiliary classifier have been proposed to generate adversarial examples for each label class. In the context of natural languages, use of seq2seq models BIBREF10 seems to be a natural way of perturbing an input example BIBREF11. Such perturbations, that practically paraphrase the original sentence, lie somewhere between the two methods described above. On one hand, the decoder is not constrained to be in a norm ball from the input and, on the other hand, the output is strongly conditioned on the input and hence, not unrestricted.
Current NLP work on input perturbations and defense against them has mainly focused on sentence classification. In this paper, we examine a harder task: joint intent detection (sentence classification) and slot tagging (sequence word tagging) for task oriented dialog, which has been of recent interest BIBREF12 due to the ubiquity of commercial conversational AI systems.
In the task and data described in Section SECREF2, we observe that exchanging a word with its synonym, as well as changing the structural order of a query can flip the model prediction. Table TABREF1 shows a few such sentence pairs for which the model prediction is different. Motivated by this, in this paper, we focus on analyzing the model robustness against two types of untargeted (that is, we do not target a particular perturbed label) perturbations: paraphrasing and random noise. In order to evaluate the defense mechanisms, we discuss how one can create an adversarial test set focusing on these two types of perturbations in the setting of joint sentence classification and sequence word tagging.
Our contributions are: 1. Analyzing the robustness of the joint task of sentence classification and sequence word tagging through generating diverse untargeted adversarial examples using back-translation and noisy autoencoder, and 2. Two techniques to improve upon a model's robustness – data augmentation using back-translation, and adversarial logit pairing loss. Data augmentation using back-translation was earlier proposed as a defense mechanism for a sentence classification task BIBREF11; we extend it to sequence word tagging. We investigate using different types of machine translation systems, as well as different auxiliary languages, for both test set generation and data augmentation. Logit pairing was proposed for improving the robustness in the image classification setting with norm ball attacks BIBREF6; we extend it to the NLP context. We show that combining the two techniques gives the best results.
<<</Introduction>>>
<<<Task and Data>>>
In conversational AI, the language understanding task typically consists of classifying the intent of a sentence and tagging the corresponding slots. For example, a query like What's the weather in Sydney today could be annotated as a weather/find intent, with Sydney and today being location and datetime slots, respectively. This predicted intent then informs which API to call to answer the query and the predicted slots inform the arguments for the call. See Fig. FIGREF2. Slot tagging is arguably harder compared to intent classification since the spans need to align as well.
We use the data provided by BIBREF13, which consists of task-oriented queries in the weather and alarm domains. The data contains 25k training, 3k evaluation and 7k test queries with 11 intents and 7 slots. We conflate and use a common set of labels for the two domains. Since there is no ambiguous slot or intent in the domains, unlike BIBREF14, we do not need to train a domain classifier, either jointly or at the beginning of the pipeline. If a query is not supported by the system but is unambiguously part of the alarm or weather domain, it is marked as alarm/unsupported or weather/unsupported, respectively.
<<</Task and Data>>>
<<<Robustness Evaluation>>>
To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e., perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactical changes. We use two approaches described in the literature: back-translation and a noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system; hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that, though very similar to the original test set, result in wrong predictions. We will measure the model robustness against such changes.
Also note that to make the test set hard, we select only the examples for which the model prediction is different for the paraphrased sentence compared to the original sentence. We, however, do not use the original annotation for the perturbed sentences – instead, we re-annotate the sentences manually. We explain the motivation and methodology for manual annotation later in this section.
<<<Automatically Generating Examples>>>
We describe two methods of devising untargeted (not targeted towards a particular label) paraphrase generation to find a subset that dramatically reduce the accuracy of the model mentioned in the previous section. We follow BIBREF11 and BIBREF15 to generate the potential set of sentences.
<<<Back-translation>>>
Back-translation is a common technique in Machine Translation (MT) to improve translation performance, especially for low-resource language pairs BIBREF16, BIBREF17, BIBREF18. In back-translation, an MT system is used to translate the original sentences to an auxiliary language and a reverse MT system translates them back into the original language. At the final decoding phase, the top k beams are the variations of the original sentence. See Fig. FIGREF5. This follows BIBREF11, which showed the effectiveness of simple back-translation in quickly generating adversarial paraphrases that break correctly predicted examples.
To increase diversity, we use two different MT systems and two different auxiliary languages - Czech (cs) and Spanish (es), to use with our training data in English (en). We use the Nematus BIBREF19 pre-trained cs-en model, which was also used in BIBREF11, as well as the FB internal MT system with pre-trained models for cs-en and es-en language pairs.
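The general recipe can be sketched as below; publicly available MarianMT checkpoints are used as stand-ins for the Nematus and FB-internal systems, so the model names and generation settings are assumptions rather than the actual set-up.
# Illustrative back-translation through Czech using MarianMT checkpoints
# (stand-ins for the MT systems described in the text).
from transformers import MarianMTModel, MarianTokenizer

tok_fwd = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-cs")
mt_fwd = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-cs")
tok_bwd = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-cs-en")
mt_bwd = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-cs-en")

def back_translate(sentence, k=5):
    # en -> cs (single best translation)
    cs_ids = mt_fwd.generate(**tok_fwd([sentence], return_tensors="pt", padding=True))
    cs_text = tok_fwd.batch_decode(cs_ids, skip_special_tokens=True)
    # cs -> en, keeping the top-k beams as candidate paraphrases
    en_ids = mt_bwd.generate(**tok_bwd(cs_text, return_tensors="pt", padding=True),
                             num_beams=k, num_return_sequences=k)
    beams = tok_bwd.batch_decode(en_ids, skip_special_tokens=True)
    return [b for b in beams if b.lower() != sentence.lower()]  # drop exact copies of the input

paraphrases = back_translate("set an alarm for 7 am tomorrow")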
<<</Back-translation>>>
<<<Noisy Sequence Autoencoder>>>
Following BIBREF15, we train a sequence autoencoder BIBREF20 using all the training data. At test time, we add noise to the last hidden state of the encoder, which is used to decode a variation. We found that not using attention results in more diverse examples, by giving the model more freedom to stray from the original sentence. We again decode the top k beams as variations to the original sentence. We observed that the seq2seq model results in less meaningful sentences than using the MT systems, which have been trained over millions of sentences.
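Only the noise-injection step differs from ordinary autoencoder decoding; a sketch is given below, with encoder and decode_beam as placeholders for the trained LSTM modules and the beam-search decoder, and sigma as an illustrative noise scale.
# Sketch of the noisy-decoding step for a trained LSTM sequence autoencoder
# (no attention). `encoder` and `decode_beam` are placeholders for trained modules;
# only the perturbation of the final encoder state is specific to this method.
import torch

def noisy_variations(encoder, decode_beam, token_ids, sigma=0.1, k=5):
    _, (h, c) = encoder(token_ids)            # final hidden/cell states of the encoder
    h = h + sigma * torch.randn_like(h)       # perturb the sentence representation
    return decode_beam((h, c), num_beams=k)   # top-k beams as variations of the input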
<<</Noisy Sequence Autoencoder>>>
<<</Automatically Generating Examples>>>
<<<Annotation>>>
For each of the above methods, we use the original test data and generate paraphrases using k=5 beams. We remove the beams that are the same as the original sentence after lower-casing. In order to make sure we have a high-quality adversarial test set, we need to manually check the model's prediction on the above automatically-generated datasets. Unlike targeted methods to procure adversarial examples, our datasets have been generated by random perturbations in the original sentences. Hence, we expect that the true adversarial examples would be quite sparse. In order to obviate the need for manual annotation of a large dataset to find these sparse examples, we sample only from the paraphrases for which the predicted intent is different from the original sentence's predicted intent. This significantly increases the chance of encountering an adversarial example. Note that the model accuracy on this test set might not be zero for two reasons: 1) the flipped intent might actually be justified and not a mistake. For example, “Cancel the alarm” and “Pause the alarm” may be considered as paraphrases, but in the dataset they correspond to alarm/cancel and alarm/pause intents, respectively, and 2) the model might have been making an error in the original prediction, which was corrected by the paraphrase. (However, upon manual observation, this rarely happens).
The other reason that we need manual annotation is that such unrestricted generation may result in new variations that can be meaningless or ambiguous without any context. Note that if the meaning can be easily inferred, we do not count slight grammatical errors as meaningless. Thus, we manually double annotate the sentences with flipped intent classification where the disagreements are resolved by a third annotator. As a part of this manual annotation, we also remove the meaningless and ambiguous sentences. Note that these adversarial examples are untargeted, i.e., we had no control in which new label a perturbed example would be sent to.
<<</Annotation>>>
<<<Analysis>>>
We have shown adversarial examples from different sources alongside their original sentence in Table TABREF3. We observe that some patterns, such as addition of a definite article or gerund appear more often in the es test set which perhaps stems from the properties of the Spanish language (i.e., most nouns have an article and present simple/continuous tense are often interchangeable). On the other hand, there is more verbal diversity in the cs test set which may be because of the linguistic distance of Czech from English compared with Spanish. Moreover, we observe many imperative-to-declarative transformation in all the back-translated examples. Finally, the seq2seq examples seem to have a higher degree of freedom but that can tip them off into the meaningless realm more often too.
<<</Analysis>>>
<<</Robustness Evaluation>>>
<<<Base Model>>>
A commonly used architecture for the task described in Section SECREF2 is a bidirectional LSTM for the sentence representation with separate projection layers for sentence (intent) classification and sequence word (slot) tagging BIBREF21, BIBREF22, BIBREF12, BIBREF14. In order to evaluate the model in a task oriented setting, exact match accuracy (from now on, accuracy) is of paramount importance. This is defined as the percentage of the sentences for which the intent and all the slots have been correctly tagged.
We use two biLSTM layers of size 200 and two feed-forward layers for the intents and the slots. We use dropout of $0.3$ and train the model for 20 epochs with learning rate of $0.01$ and weight decay of $0.001$. This model, our baseline, achieves $87.1\%$ accuracy over the test set.
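A minimal PyTorch sketch of this architecture is given below; the embedding size, the mean-pooled sentence representation and the vocabulary handling are assumptions, since only the biLSTM size, dropout and training hyper-parameters are specified above.

```python
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Two biLSTM layers of size 200 with separate heads for the sentence intent
    and the per-token slot tags, as described above; other details are assumed."""
    def __init__(self, vocab_size, n_intents, n_slots, emb_dim=100, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, num_layers=2, bidirectional=True,
                               batch_first=True, dropout=0.3)
        self.dropout = nn.Dropout(0.3)
        self.intent_head = nn.Linear(2 * hidden, n_intents)
        self.slot_head = nn.Linear(2 * hidden, n_slots)

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        states, _ = self.encoder(self.emb(tokens))    # (batch, seq_len, 2*hidden)
        states = self.dropout(states)
        intent_logits = self.intent_head(states.mean(dim=1))  # pooled sentence repr.
        slot_logits = self.slot_head(states)                   # one tag per token
        return intent_logits, slot_logits
```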
The performance of the base model described in the previous section is shown in the first row of Table TABREF8 for the Nematus cs-en ($\bar{cs}$), FB MT system cs-en (cs) and es-en (es), sequence autoencoder (seq2seq), and the average of the adversarial sets (avg). We also included the results for the ensemble model, which combines the decisions of five separate baseline models that differ in batch order, initialization, and dropout masking. We can see that, similar to the case in computer vision BIBREF4, the adversarial examples seem to stem from fundamental properties of the neural networks and ensembling helps only a little.
<<</Base Model>>>
<<<Approaches to Improve Robustness>>>
In order to improve robustness of the base model against paraphrases and random noise, we propose two approaches: data augmentation and model smoothing via adversarial logit pairing. Data augmentation generates and adds training data without manual annotation. This would help the model see variations that it has not observed in the original training data. As discussed before, back-translation is one way to generate unlabeled data automatically. In this paper, we show how we can automatically generate labels for such sentences during training time and show that it improves the robustness of the model. Note that for our task we have to automatically label both sentence labels (intent) and word tags (slots) for such sentences.
The second method we propose is adding logit pairing loss. Unlike data augmentation, logit pairing treats the original and paraphrased sentence sets differently. As such, in addition to the cross-entropy loss over the original training data, we would have another loss term enforcing that the predictions for a sentence and its paraphrases are similar in the logit space. This would ensure that the model makes smooth decisions and prevent the model from making drastically different decisions with small perturbations.
<<<Data Augmentation>>>
We generate back-translated data from the training data using the pre-trained FB MT system. We keep the top 5 beams after back-translation and remove the beams that already exist in the training data after lower-casing. We observed that including the top 5 beams results in quite diverse combinations without hurting the readability of the sentences. In order to use the unlabeled data, we use an extended version of self-training BIBREF23 in which the original classifier is used to annotate the unlabeled data. Unsurprisingly, self-training can result in reinforcing the model's errors. Since sentence intents usually remain the same after paraphrasing, we annotate each paraphrase with the intent of the original sentence. Since many slot texts may be altered or removed during back-translation, we use self-training to label the slots of the paraphrases. We train the model on the combined clean and noisy datasets, with the loss function being the original loss plus the loss on the back-translated data weighted by 0.1, a setting for which the impact on the clean dev set accuracy is still negligible. The model seemed quite insensitive to this weight, though: the clean dev accuracy was hurt by less than 1 point even when weighting the augmented data equally with the original data. The accuracy over the clean test set using the augmented training data, with Czech (cs) and Spanish (es) as the auxiliary languages, is shown in Table TABREF8.
We observe that, as expected, data augmentation improves accuracy on sentences generated using back-translation, however we see that it also improves accuracy on sentences generated using seq2seq autoencoder. We discuss the results in more detail in the next section.
<<</Data Augmentation>>>
<<<Model smoothing via Logit Pairing>>>
BIBREF6 perturb images with the attacks introduced by BIBREF3 and report state-of-the-art results by matching the logit distribution of the perturbed and original images instead of matching only the classifier decision. They also introduce clean pairing in which the logit pairing is applied to random data points in the clean training data, which yields surprisingly good results. Here, we modify both methods for the language understanding task, including sequence word tagging, and expand the approach to targeted pairing for increasing robustness against adversarial examples.
<<<Clean Logit Pairing>>>
Pairing random queries as proposed by BIBREF6 performed very poorly on our task. In this paper, we instead study the effect of pairing sentences that have the same annotations, i.e., the same intent and the same slot labels. Consider a batch $M$ with $m$ clean sentences. For each tuple of intent and slot labels, we identify the corresponding sentences in the batch, $M_k$, and sample pairs of sentences from them. We add a second cost function to the original cost function for the batch that enforces the logit vectors of the intent and of the same-label slots of those pairs of sentences to have similar distributions:
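A plausible form of this loss, reconstructed from the definitions that follow rather than quoted from the paper, is

$$\frac{\lambda _{sf}}{P}\sum _{(i,j)} \Big[ L\big(I^{(i)}, I^{(j)}\big) + \sum _{s} L\big(S^{(i)}_s, S^{(j)}_s\big) \Big],$$

with the outer sum running over the sampled same-label pairs $(i,j)$ in $M_k$.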
where $I^{(i)}$ and $S^{(i)}_s$ denote the logit vectors corresponding to the intent and the $s^{th}$ slot of the $i^{th}$ sentence, respectively. Moreover, $P$ is the total number of sampled pairs, and $\lambda _{sf}$ is a hyper-parameter. We sum the above loss over all the unique tuples of labels and normalize by the total number of pairs. Throughout this section, we use the MSE loss for the function $L()$. We train the model with the same parameters as in Section SECREF2, the only difference being that we use a learning rate of $0.001$ and train for 25 epochs to improve model convergence. Contrary to what we expected, clean logit pairing on its own reduces accuracy on both the clean and the adversarial test sets. Our hypothesis is that the logit smoothing resulting from this method prevents the model from using weakly correlated features BIBREF5, which could otherwise have helped the accuracy on both the clean and the adversarial test sets.
<<</Clean Logit Pairing>>>
<<<Adversarial Logit Pairing (ALP)>>>
In order to make the model more robust to paraphrases, we pair a sentence with its back-translated paraphrases and enforce that their logit distributions be similar. We generate the paraphrases using the FB MT system as in the previous section, with es and cs as auxiliary languages. For the sentences $m^{(i)}$ inside the mini-batch and their paraphrases $\tilde{m}^{(i)}_k$, we add the following loss:
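One plausible form of this loss, again reconstructed from the definitions that follow rather than quoted from the paper (with $\tilde{I}^{(i)}_k$ and $\tilde{S}^{(i)}_{k,s}$ denoting the intent and slot logits computed on the paraphrase $\tilde{m}^{(i)}_k$), is

$$\frac{\lambda _{a}}{P}\sum _{i}\sum _{k} \Big[ L\big(I^{(i)}, \tilde{I}^{(i)}_k\big) + \sum _{s} L\big(S^{(i)}_s, \tilde{S}^{(i)}_{k,s}\big) \Big]$$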
where $P$ is the total number of original-paraphrase sentence pairs. Note that the first term, which pairs the logit vectors of the predicted intents of a sentence and its paraphrase, can be obtained in an unsupervised fashion. For the second term, however, we need to know the positions of the slots in the paraphrases so that they can be matched with the original slots. We again use self-training to tag the slots in the paraphrased sentences. We then pair the logit vectors corresponding to the common labels found among the original and paraphrased slots, matched left to right. We also find that adding a similar loss for pairs of paraphrases of the original sentence, i.e., matching the logit vectors corresponding to their intents and slots, can further improve accuracy over the adversarial test sets. In Table TABREF8, we show the results using ALP (with both the original-paraphrase and paraphrase-paraphrase pairs) for $\lambda _a=0.01$.
<<</Adversarial Logit Pairing (ALP)>>>
<<</Model smoothing via Logit Pairing>>>
<<</Approaches to Improve Robustness>>>
<<<Results and Discussion>>>
We observe that data augmentation using back-translation improves the accuracy across all the adversarial sets, including the seq2seq test set. Unsurprisingly, the gains are the highest when augmenting the training data using the same MT system and the same auxiliary language that the adversarial test set was generated from. However, more interestingly, it is still effective for adversarial examples generated using a different auxiliary language or a different MT system (which, as discussed in the previous section, yielded different types of sentences) from that which was used at the training time. More importantly, even if the generation process is different altogether, that is, the seq2seq dataset generated by the noisy autoencoder, some of the gains are still transferred and the accuracy over the adversarial examples increases. We also train a model using the es and cs back-translated data combined. Table TABREF8 shows that this improves the average performance over the adversarial sets.
This suggests that in order to achieve robustness towards different types of paraphrasing, we would need to augment the training data using data generated with various techniques. But one can hope that some of the defense would be transferred for adversarial examples that come from unknown sources. Note that unlike the manually annotated test sets, the augmented training data contains noise both in the generation step (e.g. meaningless utterances) as well as in the automatic annotation step. But the model seems to be quite robust toward this random noise; its accuracy over the clean test set is almost unchanged while yielding nontrivial gains over the adversarial test sets.
We observe that ALP results in performance on the adversarial test sets that is competitive with data augmentation, but it has a more detrimental effect on the clean test set accuracy. We hypothesize that data augmentation helps smooth the decision boundaries without preventing the model from using weakly correlated features. Hence, the regression on the clean test set is very small. This is in contrast with adversarial defense mechanisms such as ALP BIBREF5, which make the model regress much more on the clean test set.
We also combine ALP with the data augmentation technique that yields the highest accuracy on the adversarial test sets but incurs additional costs to the clean test set (more than three points compared with the base model). Adding clean logit pairing to the above resulted in the most defense transfer (i.e. accuracy on the seq2seq adversarial test set) but it is detrimental to almost all the other metrics. One possible explanation can be that the additional regularization stemming from the clean logit pairing helps with generalization (and hence, the transfer) from the back-translated augmented data to the seq2seq test set but it is not helpful otherwise.
<<</Results and Discussion>>>
<<<Related Work>>>
Adversarial examples BIBREF4 refer to inputs intentionally devised by an adversary to cause the model to make highly-confident but erroneous predictions, e.g., the Fast Gradient Sign Attack (FGSA) BIBREF4 and Projected Gradient Descent (PGD) BIBREF3. In such methods, the constrained perturbation that (approximately) maximizes the loss for an original data point is added to it. In white-box attacks, the perturbations are chosen to maximize the model loss for the original inputs BIBREF4, BIBREF3, BIBREF24. Such attacks have been shown to be transferable to other models, which makes it possible to devise black-box attacks on a machine learning model by transferring from a known model BIBREF25, BIBREF1.
Defense against such examples has been an elusive task, with proposed mechanisms proving effective against only particular attacks BIBREF3, BIBREF26. Adversarial training BIBREF4 augments the training data with carefully picked perturbations during the training time, which is robust against normed-ball perturbations. But in the general setting of having unrestricted adversarial examples, these defenses have been shown to be highly ineffective BIBREF27.
BIBREF28 introduced white-box attacks for language by swapping one token for another based on the gradient of the input. BIBREF29 introduced an algorithm to generate adversarial examples for sentiment analysis and textual entailment by replacing words of the sentence with similar tokens that preserve the language model scoring and maximize the target class probability. BIBREF7 introduced one of the few defense mechanisms for NLP by extending adversarial training to this domain by perturbing the input embeddings and enforcing the label (distribution) to remain unchanged. BIBREF30 and BIBREF8 used this strategy as a regularization method for part-of-speech, relation extraction and NER tasks. Such perturbations resemble the normed-ball attacks for images but the perturbed input does not correspond to a real adversarial example. BIBREF11 studied two methods of generating adversarial data – back-translation and syntax-controlled sequence-to-sequence generation. They show that although the latter method is more effective in generating syntactically diverse examples, the former is also a fast and effective way of generating adversarial examples.
There has been a large body of literature on language understanding for task oriented dialog using the intent/slot framework. Bidirectional LSTM for the sentence representation alongside separate projection layers for intent and slot tagging is the typical architecture for the joint task BIBREF21, BIBREF22, BIBREF12, BIBREF14.
In parallel to the current work, BIBREF31 introduced unsupervised data augmentation for classification tasks by perturbing the training data and similar to BIBREF7 minimize the KL divergence between the predicted distributions on an unlabeled example and its perturbations. Their goal is to achieve high accuracy using as little labeled data as possible by leveraging the unlabeled data. In this paper, we have focused on increasing the model performance on adversarial test sets in supervised settings while constraining the degradation on the clean test set. Moreover, we focused on a more complicated task: the joint classification and sequence tagging task.
<<</Related Work>>>
<<<Conclusion>>>
In this paper, we study the robustness of language understanding models for the joint task of sentence classification and sequence word tagging in the field of task oriented dialog by generating adversarial test sets. We further discuss defense mechanisms using data augmentation and adversarial logit pairing loss.
We first generate adversarial test sets using two methods, back-translation with two languages and sequence auto-encoder, and observe that the two methods generate different types of sentences. Our experiments show that creating the test set using a combination of the two methods above is better than either method alone, based on the model's performance on the test sets. Secondly, we propose how to improve the model's robustness against such adversarial test sets by both augmenting the training data and using a new loss function based on logit pairing with back-translated paraphrases annotated using self-training. The experiments show that combining data augmentation using back-translation and adversarial logit pairing loss performs best on the adversarial test sets.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Related Work, Introduction"
],
"type": "disordered_section"
}
|
1911.05153
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Improving Robustness of Task Oriented Dialog Systems
<<<Abstract>>>
Task oriented language understanding in dialog systems is often modeled using intents (task of a query) and slots (parameters for that task). Intent detection and slot tagging are, in turn, modeled using sentence classification and word tagging techniques respectively. Similar to adversarial attack problems with computer vision models discussed in existing literature, these intent-slot tagging models are often over-sensitive to small variations in input -- predicting different and often incorrect labels when small changes are made to a query, thus reducing their accuracy and reliability. However, evaluating a model's robustness to these changes is harder for language since words are discrete and an automated change (e.g. adding `noise') to a query sometimes changes the meaning and thus labels of a query. In this paper, we first describe how to create an adversarial test set to measure the robustness of these models. Furthermore, we introduce and adapt adversarial training methods as well as data augmentation using back-translation to mitigate these issues. Our experiments show that both techniques improve the robustness of the system substantially and can be combined to yield the best results.
<<</Abstract>>>
<<<Introduction>>>
In computer vision, it is well known that otherwise competitive models can be "fooled" by adding intentional noise to the input images BIBREF0, BIBREF1. Such changes, imperceptible to the human eye, can cause the model to reverse its initial correct decision on the original input. This has also been studied for Automatic Speech Recognition (ASR) by including hidden commands BIBREF2 in the voice input. Devising such adversarial examples for machine learning algorithms, in particular for neural networks, along with defense mechanisms against them, has been of recent interest BIBREF3. The lack of smoothness of the decision boundaries BIBREF4 and reliance on weakly correlated features that do not generalize BIBREF5 seem to be the main reasons for confident but incorrect predictions for instances that are far from the training data manifold. Among the most successful techniques to increase resistance to such attacks is perturbing the training data and enforcing the output to remain the same BIBREF4, BIBREF6. This is expected to improve the smoothing of the decision boundaries close to the training data but may not help with points that are far from them.
There has been recent interest in studying this adversarial attack phenomenon for natural language processing tasks, but that is harder than vision problems for at least two reasons: 1) textual input is discrete, and 2) adding noise may completely change a sentence's meaning or even make it meaningless. Although there are various works that devise adversarial examples in the NLP domain, defense mechanisms have been rare. BIBREF7 applied perturbation to the continuous word embeddings instead of the discrete tokens. This has been shown BIBREF8 to act as a regularizer that increases the model performance on the clean dataset but the perturbed inputs are not true adversarial examples, as they do not correspond to any input text and it cannot be tested whether they are perceptible to humans or not.
Unrestricted adversarial examples BIBREF9 lift the constraint on the size of added perturbation and as such can be harder to defend against. Recently, Generative Adversarial Networks (GANs) alongside an auxiliary classifier have been proposed to generate adversarial examples for each label class. In the context of natural languages, use of seq2seq models BIBREF10 seems to be a natural way of perturbing an input example BIBREF11. Such perturbations, that practically paraphrase the original sentence, lie somewhere between the two methods described above. On one hand, the decoder is not constrained to be in a norm ball from the input and, on the other hand, the output is strongly conditioned on the input and hence, not unrestricted.
Current NLP work on input perturbations and defense against them has mainly focused on sentence classification. In this paper, we examine a harder task: joint intent detection (sentence classification) and slot tagging (sequence word tagging) for task oriented dialog, which has been of recent interest BIBREF12 due to the ubiquity of commercial conversational AI systems.
In the task and data described in Section SECREF2, we observe that exchanging a word with its synonym, as well as changing the structural order of a query can flip the model prediction. Table TABREF1 shows a few such sentence pairs for which the model prediction is different. Motivated by this, in this paper, we focus on analyzing the model robustness against two types of untargeted (that is, we do not target a particular perturbed label) perturbations: paraphrasing and random noise. In order to evaluate the defense mechanisms, we discuss how one can create an adversarial test set focusing on these two types of perturbations in the setting of joint sentence classification and sequence word tagging.
Our contributions are: 1. Analyzing the robustness of the joint task of sentence classification and sequence word tagging through generating diverse untargeted adversarial examples using back-translation and noisy autoencoder, and 2. Two techniques to improve upon a model's robustness – data augmentation using back-translation, and adversarial logit pairing loss. Data augmentation using back-translation was earlier proposed as a defense mechanism for a sentence classification task BIBREF11; we extend it to sequence word tagging. We investigate using different types of machine translation systems, as well as different auxiliary languages, for both test set generation and data augmentation. Logit pairing was proposed for improving the robustness in the image classification setting with norm ball attacks BIBREF6; we extend it to the NLP context. We show that combining the two techniques gives the best results.
<<</Introduction>>>
<<<Task and Data>>>
In conversational AI, the language understanding task typically consists of classifying the intent of a sentence and tagging the corresponding slots. For example, a query like What's the weather in Sydney today could be annotated as a weather/find intent, with Sydney and today being location and datetime slots, respectively. This predicted intent then informs which API to call to answer the query and the predicted slots inform the arguments for the call. See Fig. FIGREF2. Slot tagging is arguably harder compared to intent classification since the spans need to align as well.
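For concreteness, the example query above might be represented roughly as follows; the field names and span convention here are illustrative assumptions rather than the dataset's actual schema.

```python
# Hypothetical representation of the annotated query above.
example = {
    "text":   "What's the weather in Sydney today",
    "intent": "weather/find",
    "slots": [
        {"text": "Sydney", "label": "location"},
        {"text": "today",  "label": "datetime"},
    ],
}
```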
We use the data provided by BIBREF13, which consists of task-oriented queries in weather and alarm domains. The data contains 25k training, 3k evaluation and 7k test queries with 11 intents and 7 slots. We conflate and use a common set of labels for the two domains. Since there is no ambiguous slot or intent in the domains, unlike BIBREF14, we do not need to train a domain classifier, neither jointly nor at the beginning of the pipeline. If a query is not supported by the system but it is unambiguously part of the alarm or weather domains, they are marked as alarm/unsupported and weather/unsupported respectively.
<<</Task and Data>>>
<<<Robustness Evaluation>>>
To evaluate model robustness, we devise a test set consisting of ‘adversarial’ examples, i.e, perturbed examples that can potentially change the base model's prediction. These could stem from paraphrasing a sentence, e.g., lexical and syntactical changes. We use two approaches described in literature: back-translation and noisy sequence autoencoder. Note that these examples resemble black-box attacks but are not intentionally designed to fool the system and hence, we use the term 'adversarial' broadly. We use these techniques to produce many paraphrases and find a subset of utterances that though very similar to the original test set, result in wrong predictions. We will measure the model robustness against such changes.
Also note that to make the test set hard, we select only the examples for which the model prediction is different for the paraphrased sentence compared to the original sentence. We, however, do not use the original annotation for the perturbed sentences – instead, we re-annotate the sentences manually. We explain the motivation and methodology for manual annotation later in this section.
<<<Automatically Generating Examples>>>
We describe two methods of devising untargeted (not targeted towards a particular label) paraphrase generation to find a subset that dramatically reduce the accuracy of the model mentioned in the previous section. We follow BIBREF11 and BIBREF15 to generate the potential set of sentences.
<<<Back-translation>>>
Back-translation is a common technique in Machine Translation (MT) to improve translation performance, especially for low-resource language pairs BIBREF16, BIBREF17, BIBREF18. In back-translation, an MT system is used to translate the original sentences into an auxiliary language, and a reverse MT system translates them back into the original language. At the final decoding phase, the top k beams are kept as variations of the original sentence (see Fig. FIGREF5). BIBREF11 showed the effectiveness of simple back-translation in quickly generating adversarial paraphrases that break correctly predicted examples.
To increase diversity, we use two different MT systems and two different auxiliary languages - Czech (cs) and Spanish (es), to use with our training data in English (en). We use the Nematus BIBREF19 pre-trained cs-en model, which was also used in BIBREF11, as well as the FB internal MT system with pre-trained models for cs-en and es-en language pairs.
<<</Back-translation>>>
<<<Noisy Sequence Autoencoder>>>
Following BIBREF15, we train a sequence autoencoder BIBREF20 using all the training data. At test time, we add noise to the last hidden state of the encoder, which is used to decode a variation. We found that not using attention results in more diverse examples, by giving the model more freedom to stray from the original sentence. We again decode the top k beams as variations to the original sentence. We observed that the seq2seq model results in less meaningful sentences than using the MT systems, which have been trained over millions of sentences.
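A compact PyTorch sketch of this perturbation step is shown below; the layer sizes, the Gaussian noise scale and the greedy decoding loop (the paper decodes the top k beams) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class NoisySeqAutoencoder(nn.Module):
    """Attention-free sequence autoencoder; at test time Gaussian noise is added
    to the final encoder state before decoding a variation of the input."""
    def __init__(self, vocab_size, emb_dim=100, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.decoder = nn.LSTMCell(emb_dim, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def perturb_and_decode(self, tokens, bos_id, max_len=20, sigma=0.1):
        _, (h, c) = self.encoder(self.emb(tokens))     # final encoder state
        h = h[-1] + sigma * torch.randn_like(h[-1])    # noise injection (test time)
        c = c[-1]
        prev = torch.full((tokens.size(0),), bos_id, dtype=torch.long)
        outputs = []
        for _ in range(max_len):                       # greedy decode for brevity
            h, c = self.decoder(self.emb(prev), (h, c))
            prev = self.out(h).argmax(dim=-1)
            outputs.append(prev)
        return torch.stack(outputs, dim=1)             # (batch, max_len) token ids
```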
<<</Noisy Sequence Autoencoder>>>
<<</Automatically Generating Examples>>>
<<<Annotation>>>
For each of the above methods, we use the original test data and generate paraphrases using k=5 beams. We remove the beams that are the same as the original sentence after lower-casing. In order to make sure we have a high-quality adversarial test set, we need to manually check the model's prediction on the above automatically-generated datasets. Unlike targeted methods to procure adversarial examples, our datasets have been generated by random perturbations in the original sentences. Hence, we expect that the true adversarial examples would be quite sparse. In order to obviate the need for manual annotation of a large dataset to find these sparse examples, we sample only from the paraphrases for which the predicted intent is different from the original sentence's predicted intent. This significantly increases the chance of encountering an adversarial example. Note that the model accuracy on this test set might not be zero for two reasons: 1) the flipped intent might actually be justified and not a mistake. For example, “Cancel the alarm” and “Pause the alarm” may be considered as paraphrases, but in the dataset they correspond to alarm/cancel and alarm/pause intents, respectively, and 2) the model might have been making an error in the original prediction, which was corrected by the paraphrase. (However, upon manual observation, this rarely happens).
The other reason that we need manual annotation is that such unrestricted generation may result in new variations that can be meaningless or ambiguous without any context. Note that if the meaning can be easily inferred, we do not count slight grammatical errors as meaningless. Thus, we manually double annotate the sentences with flipped intent classification where the disagreements are resolved by a third annotator. As a part of this manual annotation, we also remove the meaningless and ambiguous sentences. Note that these adversarial examples are untargeted, i.e., we had no control in which new label a perturbed example would be sent to.
<<</Annotation>>>
<<<Analysis>>>
We show adversarial examples from different sources alongside their original sentences in Table TABREF3. We observe that some patterns, such as the addition of a definite article or a gerund, appear more often in the es test set, which perhaps stems from properties of the Spanish language (i.e., most nouns take an article, and the present simple and present continuous tenses are often interchangeable). On the other hand, there is more verbal diversity in the cs test set, which may be because Czech is linguistically more distant from English than Spanish is. Moreover, we observe many imperative-to-declarative transformations in all the back-translated examples. Finally, the seq2seq examples seem to have a higher degree of freedom, but that can also tip them over into the meaningless realm more often.
<<</Analysis>>>
<<</Robustness Evaluation>>>
<<<Base Model>>>
A commonly used architecture for the task described in Section SECREF2 is a bidirectional LSTM for the sentence representation with separate projection layers for sentence (intent) classification and sequence word (slot) tagging BIBREF21, BIBREF22, BIBREF12, BIBREF14. In order to evaluate the model in a task oriented setting, exact match accuracy (from now on, accuracy) is of paramount importance. This is defined as the percentage of the sentences for which the intent and all the slots have been correctly tagged.
We use two biLSTM layers of size 200 and two feed-forward layers for the intents and the slots. We use dropout of $0.3$ and train the model for 20 epochs with learning rate of $0.01$ and weight decay of $0.001$. This model, our baseline, achieves $87.1\%$ accuracy over the test set.
The performance of the base model described in the previous section is shown in the first row of Table TABREF8 for the Nematus cs-en ($\bar{cs}$), FB MT system cs-en (cs) and es-en (es), sequence autoencoder (seq2seq), and the average of the adversarial sets (avg). We also included the results for the ensemble model, which combines the decisions of five separate baseline models that differ in batch order, initialization, and dropout masking. We can see that, similar to the case in computer vision BIBREF4, the adversarial examples seem to stem from fundamental properties of the neural networks and ensembling helps only a little.
<<</Base Model>>>
<<<Approaches to Improve Robustness>>>
In order to improve robustness of the base model against paraphrases and random noise, we propose two approaches: data augmentation and model smoothing via adversarial logit pairing. Data augmentation generates and adds training data without manual annotation. This would help the model see variations that it has not observed in the original training data. As discussed before, back-translation is one way to generate unlabeled data automatically. In this paper, we show how we can automatically generate labels for such sentences during training time and show that it improves the robustness of the model. Note that for our task we have to automatically label both sentence labels (intent) and word tags (slots) for such sentences.
The second method we propose is adding logit pairing loss. Unlike data augmentation, logit pairing treats the original and paraphrased sentence sets differently. As such, in addition to the cross-entropy loss over the original training data, we would have another loss term enforcing that the predictions for a sentence and its paraphrases are similar in the logit space. This would ensure that the model makes smooth decisions and prevent the model from making drastically different decisions with small perturbations.
<<<Data Augmentation>>>
We generate back-translated data from the training data using the pre-trained FB MT system. We keep the top 5 beams after back-translation and remove the beams that already exist in the training data after lower-casing. We observed that including the top 5 beams results in quite diverse combinations without hurting the readability of the sentences. In order to use the unlabeled data, we use an extended version of self-training BIBREF23 in which the original classifier is used to annotate the unlabeled data. Unsurprisingly, self-training can result in reinforcing the model's errors. Since sentence intents usually remain the same after paraphrasing, we annotate each paraphrase with the intent of the original sentence. Since many slot texts may be altered or removed during back-translation, we use self-training to label the slots of the paraphrases. We train the model on the combined clean and noisy datasets, with the loss function being the original loss plus the loss on the back-translated data weighted by 0.1, a setting for which the impact on the clean dev set accuracy is still negligible. The model seemed quite insensitive to this weight, though: the clean dev accuracy was hurt by less than 1 point even when weighting the augmented data equally with the original data. The accuracy over the clean test set using the augmented training data, with Czech (cs) and Spanish (es) as the auxiliary languages, is shown in Table TABREF8.
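The combined objective described above can be sketched as follows; the batch layout, the loss terms and the way self-training labels are stored are assumptions for illustration rather than the paper's actual code.

```python
import torch.nn.functional as F

AUG_WEIGHT = 0.1  # weight on the back-translated data, as described above

def training_step(model, clean_batch, aug_batch):
    """Original loss on the clean batch plus a down-weighted loss on the augmented
    batch, whose intents are copied from the original sentences and whose slots
    come from self-training; the batch fields here are assumed for illustration."""
    def joint_loss(batch):
        intent_logits, slot_logits = model(batch["tokens"])
        loss = F.cross_entropy(intent_logits, batch["intent"])
        # slot logits reshaped to (batch, n_slots, seq_len) for token-level CE
        loss = loss + F.cross_entropy(slot_logits.transpose(1, 2), batch["slots"])
        return loss

    return joint_loss(clean_batch) + AUG_WEIGHT * joint_loss(aug_batch)
```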
We observe that, as expected, data augmentation improves accuracy on sentences generated using back-translation, however we see that it also improves accuracy on sentences generated using seq2seq autoencoder. We discuss the results in more detail in the next section.
<<</Data Augmentation>>>
<<<Model smoothing via Logit Pairing>>>
BIBREF6 perturb images with the attacks introduced by BIBREF3 and report state-of-the-art results by matching the logit distribution of the perturbed and original images instead of matching only the classifier decision. They also introduce clean pairing in which the logit pairing is applied to random data points in the clean training data, which yields surprisingly good results. Here, we modify both methods for the language understanding task, including sequence word tagging, and expand the approach to targeted pairing for increasing robustness against adversarial examples.
<<<Clean Logit Pairing>>>
Pairing random queries as proposed by BIBREF6 performed very poorly on our task. In this paper, we instead study the effect of pairing sentences that have the same annotations, i.e., the same intent and the same slot labels. Consider a batch $M$ with $m$ clean sentences. For each tuple of intent and slot labels, we identify the corresponding sentences in the batch, $M_k$, and sample pairs of sentences from them. We add a second cost function to the original cost function for the batch that enforces the logit vectors of the intent and of the same-label slots of those pairs of sentences to have similar distributions:
where $I^{(i)}$ and $S^{(i)}_s$ denote the logit vectors corresponding to the intent and the $s^{th}$ slot of the $i^{th}$ sentence, respectively. Moreover, $P$ is the total number of sampled pairs, and $\lambda _{sf}$ is a hyper-parameter. We sum the above loss over all the unique tuples of labels and normalize by the total number of pairs. Throughout this section, we use the MSE loss for the function $L()$. We train the model with the same parameters as in Section SECREF2, the only difference being that we use a learning rate of $0.001$ and train for 25 epochs to improve model convergence. Contrary to what we expected, clean logit pairing on its own reduces accuracy on both the clean and the adversarial test sets. Our hypothesis is that the logit smoothing resulting from this method prevents the model from using weakly correlated features BIBREF5, which could otherwise have helped the accuracy on both the clean and the adversarial test sets.
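A minimal sketch of this pairing term with the MSE loss is given below; the batching interface, the value of the weight and the handling of padded slot positions are simplifying assumptions.

```python
import torch.nn.functional as F

def clean_logit_pairing_loss(intent_logits, slot_logits, pairs, lam_sf=0.1):
    """MSE pairing between the intent and slot logit vectors of sentence pairs
    that share the same intent and slot labels. `pairs` holds (i, j) indices
    sampled within the batch; lam_sf corresponds to lambda_sf above."""
    if not pairs:
        return intent_logits.new_zeros(())
    loss = 0.0
    for i, j in pairs:
        loss = loss + F.mse_loss(intent_logits[i], intent_logits[j])
        # same-label slots are aligned position by position in this sketch;
        # a full implementation would mask padded positions
        loss = loss + F.mse_loss(slot_logits[i], slot_logits[j])
    return lam_sf * loss / len(pairs)
```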
<<</Clean Logit Pairing>>>
<<<Adversarial Logit Pairing (ALP)>>>
In order to make the model more robust to paraphrases, we pair a sentence with its back-translated paraphrases and enforce that their logit distributions be similar. We generate the paraphrases using the FB MT system as in the previous section, with es and cs as auxiliary languages. For the sentences $m^{(i)}$ inside the mini-batch and their paraphrases $\tilde{m}^{(i)}_k$, we add the following loss:
where $P$ is the total number of original-paraphrase sentence pairs. Note that the first term, which pairs the logit vectors of the predicted intents of a sentence and its paraphrase, can be obtained in an unsupervised fashion. For the second term, however, we need to know the positions of the slots in the paraphrases so that they can be matched with the original slots. We again use self-training to tag the slots in the paraphrased sentences. We then pair the logit vectors corresponding to the common labels found among the original and paraphrased slots, matched left to right. We also find that adding a similar loss for pairs of paraphrases of the original sentence, i.e., matching the logit vectors corresponding to their intents and slots, can further improve accuracy over the adversarial test sets. In Table TABREF8, we show the results using ALP (with both the original-paraphrase and paraphrase-paraphrase pairs) for $\lambda _a=0.01$.
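A sketch of the original-paraphrase pairing term is shown below; the slot-alignment interface (produced by self-training and matched left to right) and the batching are assumptions made for illustration.

```python
import torch.nn.functional as F

def alp_loss(model, originals, paraphrases, slot_alignment, lam_a=0.01):
    """MSE pairing between a sentence's logits and those of its paraphrase.
    `slot_alignment[b]` holds two equal-length lists of slot positions, matching
    common labels in the original and the self-training-tagged paraphrase."""
    o_intent, o_slots = model(originals["tokens"])
    p_intent, p_slots = model(paraphrases["tokens"])
    loss = F.mse_loss(o_intent, p_intent)      # intent pairing needs no labels
    n_terms = 1
    for b, (o_pos, p_pos) in enumerate(slot_alignment):
        for op, pp in zip(o_pos, p_pos):       # pair common slot labels, left to right
            loss = loss + F.mse_loss(o_slots[b, op], p_slots[b, pp])
            n_terms += 1
    return lam_a * loss / n_terms
```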
<<</Adversarial Logit Pairing (ALP)>>>
<<</Model smoothing via Logit Pairing>>>
<<</Approaches to Improve Robustness>>>
<<<Results and Discussion>>>
We observe that data augmentation using back-translation improves the accuracy across all the adversarial sets, including the seq2seq test set. Unsurprisingly, the gains are the highest when augmenting the training data using the same MT system and the same auxiliary language that the adversarial test set was generated from. However, more interestingly, it is still effective for adversarial examples generated using a different auxiliary language or a different MT system (which, as discussed in the previous section, yielded different types of sentences) from that which was used at the training time. More importantly, even if the generation process is different altogether, that is, the seq2seq dataset generated by the noisy autoencoder, some of the gains are still transferred and the accuracy over the adversarial examples increases. We also train a model using the es and cs back-translated data combined. Table TABREF8 shows that this improves the average performance over the adversarial sets.
This suggests that in order to achieve robustness towards different types of paraphrasing, we would need to augment the training data using data generated with various techniques. But one can hope that some of the defense would be transferred for adversarial examples that come from unknown sources. Note that unlike the manually annotated test sets, the augmented training data contains noise both in the generation step (e.g. meaningless utterances) as well as in the automatic annotation step. But the model seems to be quite robust toward this random noise; its accuracy over the clean test set is almost unchanged while yielding nontrivial gains over the adversarial test sets.
We observe that ALP results in performance on the adversarial test sets that is competitive with data augmentation, but it has a more detrimental effect on the clean test set accuracy. We hypothesize that data augmentation helps smooth the decision boundaries without preventing the model from using weakly correlated features. Hence, the regression on the clean test set is very small. This is in contrast with adversarial defense mechanisms such as ALP BIBREF5, which make the model regress much more on the clean test set.
We also combine ALP with the data augmentation technique that yields the highest accuracy on the adversarial test sets but incurs additional costs to the clean test set (more than three points compared with the base model). Adding clean logit pairing to the above resulted in the most defense transfer (i.e. accuracy on the seq2seq adversarial test set) but it is detrimental to almost all the other metrics. One possible explanation can be that the additional regularization stemming from the clean logit pairing helps with generalization (and hence, the transfer) from the back-translated augmented data to the seq2seq test set but it is not helpful otherwise.
<<</Results and Discussion>>>
<<<Related Work>>>
Adversarial examples BIBREF4 refer to inputs intentionally devised by an adversary to cause the model to make highly-confident but erroneous predictions, e.g., the Fast Gradient Sign Attack (FGSA) BIBREF4 and Projected Gradient Descent (PGD) BIBREF3. In such methods, the constrained perturbation that (approximately) maximizes the loss for an original data point is added to it. In white-box attacks, the perturbations are chosen to maximize the model loss for the original inputs BIBREF4, BIBREF3, BIBREF24. Such attacks have been shown to be transferable to other models, which makes it possible to devise black-box attacks on a machine learning model by transferring from a known model BIBREF25, BIBREF1.
Defense against such examples has been an elusive task, with proposed mechanisms proving effective against only particular attacks BIBREF3, BIBREF26. Adversarial training BIBREF4 augments the training data with carefully picked perturbations during the training time, which is robust against normed-ball perturbations. But in the general setting of having unrestricted adversarial examples, these defenses have been shown to be highly ineffective BIBREF27.
BIBREF28 introduced white-box attacks for language by swapping one token for another based on the gradient of the input. BIBREF29 introduced an algorithm to generate adversarial examples for sentiment analysis and textual entailment by replacing words of the sentence with similar tokens that preserve the language model scoring and maximize the target class probability. BIBREF7 introduced one of the few defense mechanisms for NLP by extending adversarial training to this domain by perturbing the input embeddings and enforcing the label (distribution) to remain unchanged. BIBREF30 and BIBREF8 used this strategy as a regularization method for part-of-speech, relation extraction and NER tasks. Such perturbations resemble the normed-ball attacks for images but the perturbed input does not correspond to a real adversarial example. BIBREF11 studied two methods of generating adversarial data – back-translation and syntax-controlled sequence-to-sequence generation. They show that although the latter method is more effective in generating syntactically diverse examples, the former is also a fast and effective way of generating adversarial examples.
There has been a large body of literature on language understanding for task oriented dialog using the intent/slot framework. Bidirectional LSTM for the sentence representation alongside separate projection layers for intent and slot tagging is the typical architecture for the joint task BIBREF21, BIBREF22, BIBREF12, BIBREF14.
In parallel to the current work, BIBREF31 introduced unsupervised data augmentation for classification tasks by perturbing the training data and similar to BIBREF7 minimize the KL divergence between the predicted distributions on an unlabeled example and its perturbations. Their goal is to achieve high accuracy using as little labeled data as possible by leveraging the unlabeled data. In this paper, we have focused on increasing the model performance on adversarial test sets in supervised settings while constraining the degradation on the clean test set. Moreover, we focused on a more complicated task: the joint classification and sequence tagging task.
<<</Related Work>>>
<<<Conclusion>>>
In this paper, we study the robustness of language understanding models for the joint task of sentence classification and sequence word tagging in the field of task oriented dialog by generating adversarial test sets. We further discuss defense mechanisms using data augmentation and adversarial logit pairing loss.
We first generate adversarial test sets using two methods, back-translation with two languages and sequence auto-encoder, and observe that the two methods generate different types of sentences. Our experiments show that creating the test set using a combination of the two methods above is better than either method alone, based on the model's performance on the test sets. Secondly, we propose how to improve the model's robustness against such adversarial test sets by both augmenting the training data and using a new loss function based on logit pairing with back-translated paraphrases annotated using self-training. The experiments show that combining data augmentation using back-translation and adversarial logit pairing loss performs best on the adversarial test sets.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Introduction, Task and Data"
],
"type": "disordered_section"
}
|
2004.01670
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Directions in Abusive Language Training Data: Garbage In, Garbage Out
<<<Abstract>>>
Data-driven analysis and detection of abusive online content covers many different tasks, phenomena, contexts, and methodologies. This paper systematically reviews abusive language dataset creation and content in conjunction with an open website for cataloguing abusive language data. This collection of knowledge leads to a synthesis providing evidence-based recommendations for practitioners working with this complex and highly diverse data.
<<</Abstract>>>
<<<Introduction>>>
Abusive online content, such as hate speech and harassment, has received substantial attention over the past few years for its malign social effects. Left unchallenged, abusive content risks harming those who are targeted, toxifying public discourse and exacerbating social tensions, and could lead to the exclusion of some groups from public spaces. As such, systems which can accurately detect and classify online abuse at scale, in real-time and without bias are of central interest to tech companies, policymakers and academics.
Most detection systems rely on having the right training dataset, reflecting one of the most widely accepted mantras in computer science: Garbage In, Garbage Out. Put simply: to have systems which can detect and classify abusive online content effectively, one needs appropriate datasets with which to train them. However, creating training datasets is often a laborious and non-trivial task – and creating datasets which are non-biased, large and theoretically-informed is even more difficult (BIBREF0 p. 189). We address this issue by examining and reviewing publicly available datasets for abusive content detection, which we provide access to on a new dedicated website, hatespeechdata.com.
In the first section, we examine previous reviews and present the four research aims which guide this paper. In the second section, we conduct a critical and in-depth analysis of the available datasets, discussing first what their aim is, how tasks have been described and what taxonomies have been constructed, and then, second, what they contain and how they were annotated. In the third section, we discuss the challenges of open science in this research area and elaborate on different ways of sharing training datasets, including the website hatespeechdata.com. In the final section, we draw on our findings to establish best practices for creating datasets for abusive content detection.
<<</Introduction>>>
<<<Background>>>
The volume of research examining the social and computational aspects of abusive content detection has expanded prodigiously in the past five years. This has been driven by growing awareness of the importance of the Internet more broadly BIBREF1, greater recognition of the harms caused by online abuse BIBREF2, and policy and regulatory developments, such as the EU's Code of Conduct on Hate, the UK Government's `Online Harms' white paper BIBREF3, Germany's NetzDG laws, the Public Pledge on Self-Discipline for the Chinese Internet Industry, and France's anti-hate regulation BIBREF2. In 2020 alone, three computer science venues will host workshops on online hate (TRAC and STOC at LREC, and WOAH at EMNLP), and a shared task at 2019's SemEval on online abuse detection reports that 800 teams downloaded the training data and 115 submitted detection systems BIBREF4. At the same time, social scientific interventions have also appeared, deepening our understanding of how online abuse spreads BIBREF5 and how its harmful impact can be mitigated and challenged BIBREF6.
All analyses of online abuse ultimately rely on a way of measuring it, which increasingly means having a method which can handle the sheer volume of content produced, shared and engaged with online. Traditional qualitative methods cannot scale to handle the hundreds of millions of posts which appear on each major social media platform every day, and can also introduce inconsistencies and biases BIBREF7. Computational tools have emerged as the most promising way of classifying and detecting online abuse, drawing on work in machine learning, Natural Language Processing (NLP) and statistical modelling. Increasingly sophisticated architectures, features and processes have been used to detect and classify online abuse, leveraging technically sophisticated methods, such as contextual word embeddings, graph embeddings and dependency parsing. Despite their many differences BIBREF8, nearly all methods of online abuse detection rely on a training dataset, which is used to teach the system what is and is not abuse. However, there is a lacuna of research on this crucial aspect of the machine learning process. Indeed, although several general reviews of the field have been conducted, no previous research has reviewed training datasets for abusive content detection in sufficient breadth or depth. This is surprising given (i) their fundamental importance in the detection of online abuse and (ii) growing awareness that several existing datasets suffer from many flaws BIBREF9, BIBREF10. Close relevant work includes:
Schmidt and Wiegand conduct a comprehensive review of research into the detection and classification of abusive online content. They discuss training datasets, stating that `to perform experiments on hate speech detection, access to labelled corpora is essential' (BIBREF8, p. 7), and briefly discuss the sources and size of the most prominent existing training datasets, as well as how datasets are sampled and annotated. Schmidt and Wiegand identify two key challenges with existing datasets. First, `data sparsity': many training datasets are small and lack linguistic variety. Second, metadata (such as how data was sampled) is crucial as it lets future researchers understand unintended biases, but is often not adequately reported (BIBREF8, p. 6).
Waseem et al.BIBREF11 outline a typology of detection tasks, based on a two-by-two matrix of (i) identity- versus person- directed abuse and (ii) explicit versus implicit abuse. They emphasise the importance of high-quality datasets, particularly for more nuanced expressions of abuse: `Without high quality labelled data to learn these representations, it may be difficult for researchers to come up with models of syntactic structure that can help to identify implicit abuse.' (BIBREF11, p. 81)
Jurgens et al. BIBREF12 also conduct a critical review of hate speech detection, and note that `labelled ground truth data for building and evaluating classifiers is hard to obtain because platforms typically do not share moderated content due to privacy, ethical and public relations concerns.' (BIBREF12, p. 3661) They argue that the field needs to `address the data scarcity faced by abuse detection research' in order to better address more complex research issues and pressing social challenges, such as `develop[ing] proactive technologies that counter or inhibit abuse before it harms' (BIBREF12, pp. 3658, 3661).
Vidgen et al. describe several limitations with existing training datasets for abusive content, most noticeably how `they contain systematic biases towards certain types and targets of abuse.' BIBREF13[p.2]. They describe three issues in the quality of datasets: degradation (whereby datasets decline in quality over time), annotation (whereby annotators often have low agreement, indicating considerable uncertainty in class assignments) and variety (whereby `The quality, size and class balance of datasets varies considerably.' [p. 6]).
Chetty and AlathurBIBREF14 review the use of Internet-based technologies and online social networks to study the spread of hateful, offensive and extremist content BIBREF14. Their review covers both computational and legal/social scientific aspects of hate speech detection, and outlines the importance of distinguishing between different types of group-directed prejudice. However, they do not consider training datasets in any depth.
Fortuna and NunesBIBREF15 provide an end-to-end review of hate speech research, including the motivations for studying online hate, definitional challenges, dataset creation/sharing, and technical advances, both in terms of feature selection and algorithmic architecture (BIBREF15, 2018). They delineate between different types of online abuse, including hate, cyberbullying, discrimination and flaming, and add much needed clarity to the field. They show that (1) dataset size varies considerably but they are generally small (mostly containing fewer than 10,000 entries), (2) Twitter is the most widely-studied platform, and (3) most papers research hate speech per se (i.e. without specifying a target). Of those which do specify a target, racism and sexism are the most researched. However, their review focuses on publications rather than datasets: the same dataset might be used in multiple studies, limiting the relevance of their review for understanding the intrinsic role of training datasets. They also only engage with datasets fairly briefly, as part of a much broader review.
Several classification papers also discuss the most widely used datasets, including Davidson et al. BIBREF16 who describe five datasets, and Salminen et al. who review 17 datasets and describe four in detail BIBREF17.
This paper addresses this lacuna in existing research, providing a systematic review of available training datasets for online abuse. To provide structure to this review, we adopt the `data statements' framework put forward by Bender and Friedman BIBREF18, as well as other work providing frameworks, schema and processes for analysing NLP artefacts BIBREF19, BIBREF20, BIBREF21. Data statements are a way of documenting the decisions which underpin the creation of datasets used for Natural Language Processing (NLP). They formalise how decisions should be documented, not only ensuring scientific integrity but also addressing `the open and urgent question of how we integrate ethical considerations in the everyday practice of our field' (BIBREF18, p. 587). In many cases, we find that it is not possible to fully recreate the level of detail recorded in an original data statement from how datasets are described in publications. This reinforces the importance of proper documentation at the point of dataset creation.
As the field of online abusive content detection matures, it has started to tackle more complex research challenges, such as multi-platform, multi-lingual and multi-target abuse detection, and systems are increasingly being deployed in `the wild' for social scientific analyses and for content moderation BIBREF5. Such research heightens the focus on training datasets as exactly what is being detected comes under greater scrutiny. To enhance our understanding of this domain, our review paper has four research aims.
Research Aim One: to provide an in-depth and critical analysis of the available training datasets for abusive online content detection.
Research Aim Two: to map and discuss ways of addressing the lack of dataset sharing, and as such the lack of `open science', in the field of online abuse research.
Research Aim Three: to introduce the website hatespeechdata.com, as a way of enabling more dataset sharing.
Research Aim Four: to identify best practices for creating an abusive content training dataset.
<<</Background>>>
<<<Analysis of training datasets>>>
To identify training datasets for abusive content detection, relevant publications were gathered from four sources:
The Scopus database of academic publications, identified using keyword searches.
The ACL Anthology database of NLP research papers, identified using keyword searches.
The ArXiv database of preprints, identified using keyword searches.
Proceedings of the 1st, 2nd and 3rd workshops on abusive language online (ACL).
Most publications report on the creation of one abusive content training dataset. However, some describe several new datasets simultaneously or provide one dataset with several distinct subsets of data BIBREF22, BIBREF23, BIBREF24, BIBREF25. For consistency, we separate out each subset of data where they are in different languages or the data is collected from different platforms. As such, the number of datasets is greater than the number of publications. All of the datasets were released between 2016 and 2019, as shown in Figure FIGREF17.
<<<The purpose of training datasets>>>
<<<Problems addressed by datasets>>>
Creating a training dataset for online abuse detection is typically motivated by the desire to address a particular social problem. These motivations can inform how a taxonomy of abusive language is designed, how data is collected and what instructions are given to annotators. We identify the following motivating reasons, which were explicitly referenced by dataset creators.
Reducing harm: Aggressive, derogatory and demeaning online interactions can inflict harm on individuals who are targeted by such content and those who are not targeted but still observe it. This has been shown to have profound long-term consequences on individuals' well-being, with some vulnerable individuals expressing concerns about leaving their homes following experiences of abuse BIBREF26. Accordingly, many dataset creators state that aggressive language and online harassment are social problems which they want to help address.
Removing illegal content: Many countries legislate against certain forms of speech, e.g. direct threats of violence. For instance, the EU's Code of Conduct requires that all content that is flagged for being illegal online hate speech is reviewed within 24 hours, and removed if necessary BIBREF27. Many large social media platforms and tech companies adhere to this code of conduct (including Facebook, Google and Twitter) and, as of September 2019, 89% of such content is reviewed in 24 hours BIBREF28. However, we note that in most cases the abuse that is marked up in training datasets falls short of the requirements of illegal online hate – indeed, as most datasets are taken from public API access points, the data has usually already been moderated by the platforms and most illegal content removed.
Improving health of online conversations: The health of online communities can be severely affected by abusive language. It can fracture communities, exacerbate tensions and even repel users. This is not only bad for the community and for civic discourse in general; it also negatively impacts engagement and thus the revenue of the host platforms. Therefore, there is a growing impetus to improve user experience and ensure online dialogues are healthy, inclusive and respectful where possible. There is ample scope for improvement: a study showed that 82% of personal attacks on Wikipedia against other editors are not addressed BIBREF29. Taking steps to improve the health of exchanges in online communities will also benefit commercial and voluntary content moderators. They are routinely exposed to such content, often with insufficient safeguards, and sometimes display symptoms similar to those of PTSD BIBREF30. Automatic tools could help to lessen this exposure, reducing the burden on moderators.
<<</Problems addressed by datasets>>>
<<<Uses of datasets: How detection tasks are defined>>>
Myriad tasks have been addressed in the field of abusive online content detection, reflecting the different disciplines, motivations and assumptions behind research. This has led to considerable variation in what is actually detected under the rubric of `abusive content', and establishing a degree of order over the diverse categorisations and subcategorisations is both difficult and somewhat arbitrary. Key dimensions which dataset creators have used to categorise detection tasks include who/what is targeted (e.g. groups vs. individuals), the strength of content (e.g. covert vs. overt), the nature of the abuse (e.g. benevolent vs. hostile sexism BIBREF31), how the abuse manifests (e.g. threats vs. derogatory statements), the tone (e.g. aggressive vs. non-aggressive), the specific target (e.g. ethnic minorities vs. women), and the subjective perception of the reader (e.g. disrespectful vs. respectful). Other important dimensions include the theme used to express abuse (e.g. Islamophobia which relies on tropes about terrorism vs. tropes about sexism) and the use of particular linguistic devices, such as appeals to authority, sincerity and irony. All of these dimensions can be combined in different ways, producing a large number of intersecting tasks.
Consistency in how tasks are described will not necessarily ensure that datasets can be used interchangeably. From the description of a task, an annotation framework must be developed which converts the conceptualisation of abuse into a set of standards. This formalised representation of the `abuse' inevitably involves shortcuts, imperfect rules and simplifications. If annotation frameworks are developed and applied differently, then even datasets aimed at the same task can still vary considerably. Nonetheless, how detection tasks for online abuse are described is crucial for how the datasets – and in turn the systems trained on them – can subsequently be used. For example, a dataset annotated for hate speech can be used to examine bigoted biases, but the reverse is not true. How datasets are framed also impacts whether, and how, datasets can be combined to form large `mega-datasets' – a potentially promising avenue for overcoming data sparsity BIBREF17.
In the remainder of this section, we provide a framework for splitting out detection tasks along the two most salient dimensions: (1) the nature of abuse and (2) the granularity of the taxonomy.
<<<Detection tasks: the nature of abuse>>>
This refers to what is targeted/attacked by the content and, subsequently, how the taxonomy has been designed/framed by the dataset creators. The most well-established taxonomic distinction in this regard is the difference between (i) the detection of interpersonal abuse, and (ii) the detection of group-directed abuse BIBREF11. Other authors have sought to deductively theorise additional categories, such as `concept-directed' abuse, although these have not been widely adopted BIBREF13. Through an inductive investigation of existing training datasets, we extend this binary distinction to four primary categories of abuse which have been studied in previous work, as well as a fifth `Mixed' category.
Person-directed abuse. Content which directs negativity against individuals, typically through aggression, insults, intimidation, hostility and trolling, amongst other tactics. Most research falls under the auspices of `cyber bullying', `harassment' and `trolling' BIBREF23, BIBREF32, BIBREF33. One major dataset of English Wikipedia editor comments BIBREF29 focuses on the `personal attack' element of harassment, drawing on prior investigations that mapped out harassment in that community. Another widely used dataset focuses on trolls' intent to intimidate, distinguishing between direct harassment and other behaviours BIBREF34. An important consideration in studies of person-directed abuse is (a) interpersonal relations, such as whether individuals engage in patterns of abuse or one-off acts and whether they are known to each other in the `real' world (both of which are a key concern in studies of cyberbullying) and (b) standpoint, such as whether individuals directly engage in abuse themselves or encourage others to do so. For example, the theoretically sophisticated synthetic dataset provided by BIBREF33 identifies not only harassment but also encouragement to harassment. BIBREF22 mark up posts from computer game forums (World of Warcraft and League of Legends) for cyberbullying and annotate these as $\langle $offender, victim, message$\rangle $ tuples.
Group-directed abuse. Content which directs negativity against a social identity, which is defined in relation to a particular attribute (e.g. ethnic, racial, religious groups) BIBREF35. Such abuse is often directed against marginalised or under-represented groups in society. Group-directed abuse is typically described as `hate speech' and includes use of dehumanising language, making derogatory, demonising or hostile statements, making threats, and inciting others to engage in violence, amongst other dangerous communications. Common examples of group-directed abuse include sexism, which is included in datasets provided by BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF33 and racism, which is directly targeted in BIBREF36, BIBREF40. In some cases, specific types of group-directed abuse are subsumed within a broader category of identity-directed abuse, as in BIBREF41, BIBREF42, BIBREF4. Determining the limits of any group-directed abuse category requires careful theoretical reflection, as with the decision to include ethnic, caste-based and certain religious prejudices under `racism'. There is no `right' answer to such questions as they engage with ontological concerns about identification and `being' and the politics of categorisation.
Flagged content. Content which is reported by community members or assessed by community and professional content moderators. This covers a broad range of focuses as moderators may also remove spam, sexually inappropriate content and other undesirable contributions. In this regard, `flagged' content is akin to the concept of `trolling', which covers a wide range of behaviours, from jokes and playful interventions through to sinister personal attacks such as doxxing BIBREF43. Some forms of trolling can be measured with tools such as the Global Assessment of Internet Trolling (GAIT) BIBREF43.
Incivility. Content which is considered to be incivil, rude, inappropriate, offensive or disrespectful BIBREF24, BIBREF25, BIBREF44. Such categories are usually defined with reference to the tone that the author adopts rather than the substantive content of what they express, which is the basis of person- and group- directed categories. Such content usually contains obscene, profane or otherwise `dirty' words. This can be easier to detect as closed-class lists are effective at identifying single objectionable words (e.g. BIBREF45). However, one concern with this type of research is that the presence of `dirty' words does not necessarily signal malicious intent or abuse; they may equally be used as intensifiers or colloquialisms BIBREF46. At the same time, detecting incivility can be more difficult as it requires annotators to infer the subjective intent of the speaker or to understand (or guess) the social norms of a setting and thus whether disrespect has been expressed BIBREF42. Content can be incivil without directing hate against a group or person, and can be inappropriate in one setting but not another: as such it tends to be more subjective and contextual than other types of abusive language.
Mixed. Content which contains multiple types of abuse, usually a combination of the four categories discussed above. The intersecting nature of online language means that this is common but can also manifest in unexpected ways. For instance, female politicians may receive more interpersonal abuse than other politicians. This might not appear as misogyny because their identity as women is not referenced – but it might have motivated the abuse they were subjected to. Mixed forms of abuse require further research, and have thus far been most fully explored in the OLID dataset provided by BIBREF4, who explore several facets of abuse under one taxonomy.
<<</Detection tasks: the nature of abuse>>>
<<<Detection tasks: Granularity of taxonomies>>>
This refers to how much detail a taxonomy contains, reflected in the number of unique classes. The most important and widespread distinction is whether a binary class is used (e.g. Hate / Not) or a multi-level class, such as a tripartite split (typically, Overt, Covert and Non-abusive). In some cases, a large number of complex classes are created, such as by combining whether the abuse is targeted or not along with its theme and strength.
In general, social scientific analyses encourage creating a detailed taxonomy with a large number of fine-grained categories. However, this is only useful for machine learning if there are enough data points in each category and if annotators are capable of consistently distinguishing between them. Complex annotation schemas may not result in better training datasets if they are not implemented in a robust way. As such, it is unsurprising that binary classification schemas are the most prevalent, even though they are arguably the least useful given the variety of ways in which abuse can be articulated. This can range from the explicit and overt (e.g. directing threats against a group) to more subtle behaviours, such as micro-aggressions and dismissing marginalised groups' experiences of prejudice. Subsuming both types of behaviour within one category not only risks making detection difficult (due to considerable in-class variation) but also leads to a detection system which cannot make important distinctions between qualitatively different types of content. This has severe implications for whether detection systems trained on such datasets can actually be used for downstream tasks, such as content moderation and social scientific analysis.
Drawing together the nature and granularity of abuse, our analyses identify a hierarchy of taxonomic granularity from least to most granular:
Binary classification of a single `meta' category, such as hate/not or abuse/not. This can lead to very general and vague research, which is difficult to apply in practice.
Binary classification of a single type of abuse, such as person-directed or group-directed. This can be problematic given that abuse is nearly always directed against a specific group rather than `groups' per se.
Binary classification of abuse against a single well-defined group, such as racism/not or Islamophobia/not, or interpersonal abuse against a well-defined cohort, such as MPs and young people.
Multi-class (or multi-label) classification of different types of abuse, such as:
Multiple targets (e.g. racist, sexist and non-hateful content).
Multiple strengths (e.g. none, implicit and explicit content).
Multiple types (e.g. threats versus derogatory statements or benevolent versus hostile statements).
Multi-class classification of different types of abuse which is integrated with other dimensions of abuse.
<<</Detection tasks: Granularity of taxonomies>>>
<<</Uses of datasets: How detection tasks are defined>>>
<<</The purpose of training datasets>>>
<<<The content of training datasets>>>
<<<The `Level' of content>>>
49 of the training datasets are annotated at the level of the post, one dataset is annotated at the level of the user BIBREF47, and none of them are annotated at the level of the comment thread. Only two publications indicate that the entire conversational thread was presented to annotators when marking up individual entries, meaning that in most cases this important contextual information is not used. 49 of the training datasets contain only text. This is a considerable limitation of existing research BIBREF13, especially given the multimodal nature of online communication and the increasing ubiquity of digital-specific image-based forms of communication such as Memes, Gifs, Filters and Snaps BIBREF48. Although some work has addressed the task of detecting hateful images BIBREF49, BIBREF50, this led to the creation of a publicly available labelled training dataset in only one case BIBREF51. To our knowledge, no research has tackled the problem of detecting hateful audio content. This is a distinct challenge; alongside the semantic content audio also contains important vocal cues which provide more opportunities to investigate (but also potentially misinterpret) tone and intention.
<<</The `Level' of content>>>
<<<Language>>>
The most common language in the training datasets is English, which appears in 20 datasets, followed by Arabic and Italian (5 datasets each), Hindi-English (4 datasets) and then German, Indonesian and Spanish (3 datasets). Noticeably, several major languages, both globally and in Europe, do not appear, which suggests considerable unevenness in the linguistic and cultural focuses of abusive language detection. For instance, there are major gaps in the coverage of European languages, including Danish and Dutch. Surprisingly, French only appears once. The dominance of English may be due to how we sampled publications (for which we used English terms), but may also reflect different publishing practices in different countries and how well-developed abusive content research is.
<<</Language>>>
<<<Source of data>>>
Training datasets use data collected from a range of online spaces, including from mainstream platforms, such as Twitter, Wikipedia and Facebook, to more niche forums, such as World of Warcraft and Stormfront. In most cases, data is collected from public sources and then manually annotated but in others data is sourced through proprietary data sharing agreements with host platforms. Unsurprisingly, Twitter is the most widely used source of data, accounting for 27 of the datasets. This reflects wider concerns in computational social research that Twitter is over-used, primarily because it has a very accessible API for data collection BIBREF52, BIBREF53. Facebook and Wikipedia are the second most used sources of data, accounting for three datasets each – although we note that all three Wikipedia datasets are reported in the same publication. Many of the most widely used online platforms are not represented at all, or only in one dataset, such as Reddit, Weibo, VK and YouTube.
The lack of diversity in where data is collected from limits the development of detection systems. Three main issues emerge:
Linguistic practices vary across platforms. Twitter only allows 280 characters (previously only 140), provoking stylistic changes BIBREF54, and abusive content detection systems trained on this data are unlikely to work as well with longer pieces of text. Dealing with longer pieces of text could necessitate different classification systems, potentially affecting the choice of algorithmic architecture. Additionally, the technical affordances of platforms may affect the style, tone and topic of the content they host.
The demographics of users on different platforms vary considerably. Social science research indicates that `digital divides' exist, whereby online users are not representative of wider populations and differ across different online spaces BIBREF53, BIBREF55, BIBREF56. Blank draws attention to how Twitter users are usually younger and wealthier than offline populations; over-reliance on data from Twitter means, in effect, that we are over-sampling data from this privileged section of society. Blank also shows that there are important cross-national differences: British Twitter users are better-educated than the offline British population but the same is not true for American Twitter users compared with the offline American population BIBREF56. These demographic differences are likely to affect the types of content that users produce.
Platforms have different norms and so host different types and amounts of abuse. Mainstream platforms have made efforts in recent times to `clean up' content and so the most overt and aggressive forms of abuse, such as direct threats, are likely to be taken down BIBREF57. However, more niche platforms, such as Gab or 4chan, tolerate more offensive forms of speech and are more likely to contain explicit abuse, such as racism and very intrusive forms of harassment, such as `doxxing' BIBREF58, BIBREF59, BIBREF60. Over-reliance on a few sources of data could mean that datasets are biased towards only a subset of types of abuse.
<<</Source of data>>>
<<<Size>>>
The size of the training datasets varies considerably from 469 posts to 17 million; a difference of four orders of magnitude. Differences in size partly reflect different annotation approaches. The largest datasets are from proprietary data sharing agreements with platforms. Smaller datasets tend to be carefully collected and then manually annotated. There are no established guidelines for how large an abusive language training dataset needs to be. However, smaller datasets are problematic because they contain too little linguistic variation and increase the likelihood of overfitting. Rizoiu et al. BIBREF61 train detection models on only a proportion of the Davidson et al. and Waseem training datasets and show that this leads to worse performance, with a lower F1-Score, particularly for `data hungry' deep learning approaches BIBREF61. At the same time, `big' datasets alone are not a panacea for the challenges of abusive content classification. Large training datasets which have been poorly sampled, annotated with theoretically problematic categories or inexpertly and unthoughtfully annotated, could still lead to the development of poor classification systems.
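The effect of training-set size can also be checked empirically before a corpus is finalised. The sketch below is written in the spirit of the subsampling experiments by Rizoiu et al. BIBREF61 but is not a reproduction of their setup: it trains a simple bag-of-words classifier on increasing fractions of a labelled corpus and reports macro-F1, with `texts` and `labels` standing in for any binary-labelled abuse dataset.

# Sketch: how macro-F1 changes as the amount of training data grows.
# `texts` (list of str) and `labels` (list of 0/1) are placeholders for any labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def learning_curve(texts, labels, fractions=(0.1, 0.25, 0.5, 1.0), seed=42):
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, stratify=labels, random_state=seed)
    scores = {}
    for frac in fractions:
        n = max(1, int(len(X_train) * frac))  # random subsample (the split is already shuffled)
        vec = TfidfVectorizer(ngram_range=(1, 2))
        clf = LogisticRegression(max_iter=1000, class_weight="balanced")
        clf.fit(vec.fit_transform(X_train[:n]), y_train[:n])
        scores[frac] = f1_score(y_test, clf.predict(vec.transform(X_test)), average="macro")
    return scores

A curve that is still rising steeply when all of the data is used suggests that collecting or generating more data would pay off.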
The challenges posed by small datasets could potentially be overcome through machine learning techniques such as `semi-supervised' and `active' learning BIBREF62, although these have so far seen only limited application in abusive content detection BIBREF63. Sharifirad et al. propose using text augmentation and new text generation as a way of overcoming small datasets, which is a promising avenue for future research BIBREF64.
<<</Size>>>
<<<Class distribution and sampling>>>
Class distribution is an important, although often under-considered, aspect of the design of training datasets. Datasets with little abusive content will lack linguistic variation in terms of what is abusive, thereby increasing the risk of overfitting. More concerningly, the class distribution directly affects the nature of the engineering task and how performance should be evaluated. For instance, if a dataset is 70% hate speech then a zero-rule classification system (i.e. where everything is categorised as hate speech) will achieve 70% precision and 100% recall. This should be used as a baseline for evaluating performance: 80% precision is less impressive compared with this baseline. However, 80% precision on an evenly balanced dataset would be impressive. This is particularly important when evaluating the performance of ternary classifiers, when classes can be considerably imbalanced.
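This baseline logic is easy to operationalise. The short sketch below assumes binary labels with 1 marking abusive content; it computes the precision and recall of a zero-rule classifier that labels everything as abusive, giving the prevalence-driven reference point against which any trained model should be judged.

# Sketch: a zero-rule baseline for a dataset with a given class distribution.
from sklearn.metrics import precision_score, recall_score

def zero_rule_baseline(labels):
    """Precision/recall when every item is predicted as abusive (label 1)."""
    predictions = [1] * len(labels)
    return {
        "precision": precision_score(labels, predictions),  # equals the prevalence of abuse
        "recall": recall_score(labels, predictions),         # always 1.0
    }

print(zero_rule_baseline([1] * 70 + [0] * 30))  # {'precision': 0.7, 'recall': 1.0}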
On average, 35% of the content in the training datasets is abusive. However, class distributions vary considerably, from those with just 1% abusive content up to 100%. These differences are largely a product of how data is sampled and which platform it is taken from. Bretschneider BIBREF22 created two datasets without using purposive sampling, and as such they contain very low levels of abuse (roughly 1%). Other studies filter data collection based on platforms, time periods, keywords/hashtags and individuals to increase the prevalence of abuse. Four datasets comprise only abusive content; three cases are synthetic datasets, reported on in one publication BIBREF65, and in the other case the dataset is an amendment to an existing dataset and only contains misogynistic content BIBREF37.
Purposive sampling has been criticised for introducing various forms of bias into datasets BIBREF66, such as missing out on mis-spelled content BIBREF67 and only focusing on the linguistic patterns of an atypical subset of users. One pressing risk is that a lot of data is sampled from far-right communities – which means that most hate speech classifiers implicitly pick up on right-wing styles of discourse rather than hate speech per se. This could have profound consequences for our understanding of online political dialogue if the classifiers are applied uncritically to other groups. Nevertheless, purposive sampling is arguably a necessary step when creating a training dataset given the low prevalence of abuse on social media in general BIBREF68.
<<</Class distribution and sampling>>>
<<<Identity of the content creators>>>
The identity of the users who originally created the content in training datasets is described in only two cases. In both cases the data is synthetic BIBREF65, BIBREF33. Chung et al. use `nichesourcing' to synthetically generate abuse, with experts in tackling hate speech creating hateful posts. Sprugnoli et al. ask children to adopt pre-defined roles in an experimental classroom setup, and ask them to engage in a cyberbullying scenario. In most of the non-synthetic training datasets, some information is given about the sampling criteria used to collect data, such as hashtags. However, this does not provide direct insight into who the content creators are, such as their identity, demographics, online behavioural patterns and affiliations.
Providing more information about content creators may help address biases in existing datasets. For instance, Wiegand et al. show that 70% of the sexist tweets in the highly cited Waseem and Hovy dataset BIBREF36 come from two content creators and that 99% of the racist tweets come from just one BIBREF66. This is a serious constraint as it means that user-level metadata is artificially highly predictive of abuse. And, even when user-level metadata is not explicitly modelled, detection systems only need to pick up on the linguistic patterns of a few authors to nominally detect abuse. Overall, the complete lack of information about which users have created the content in most training datasets is a substantial limitation which may be driving as-yet-unrecognised biases. This can be remedied through the methodological rigour implicit in including a data statement with a corpus.
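Such author skew can be diagnosed with simple descriptive statistics before a dataset is released. The sketch below assumes a pandas DataFrame with hypothetical `author_id` and `label` columns and reports what share of the abusive class comes from the most prolific authors.

# Sketch: how concentrated is the abusive class amongst a handful of authors?
import pandas as pd

def author_concentration(df: pd.DataFrame, label_col="label",
                         author_col="author_id", abusive_value=1, top_n=3):
    abusive = df[df[label_col] == abusive_value]
    counts = abusive[author_col].value_counts()
    share = counts.head(top_n).sum() / max(len(abusive), 1)
    return counts.head(top_n), share  # the top authors and their combined share

A share close to 1.0, as Wiegand et al. BIBREF66 report for parts of the Waseem and Hovy dataset, signals that a classifier may learn a few authors' styles rather than abuse itself.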
<<</Identity of the content creators>>>
<<</The content of training datasets>>>
<<<Annotation of training datasets>>>
<<<Annotation process>>>
How training datasets are annotated is one of the most important aspects of their creation. A range of annotation processes are used in training datasets, which we split into five high-level categories:
Crowdsourcing (15 datasets). Crowdsourcing is widely used in NLP research because it is relatively cheap and easy to implement. The value of crowdsourcing lies in having annotations undertaken by `a large number of non-experts' (BIBREF69, p. 278) – any bit of content can be annotated by multiple annotators, effectively trading quality for quantity. Studies which use crowdsourcing with only a few annotators for each bit of content risk minimising quality without counterbalancing it with greater quantity. Furthermore, testing the work of many different annotators can be challenging BIBREF70, BIBREF71 and ensuring they are paid an ethical amount may make the cost comparable to using trained experts. Crowdsourcing has also been associated with `citizen science' initiatives to make academic research more accessible but this may not be fully realised in cases where annotation tasks are laborious and low-skilled BIBREF72, BIBREF20.
Academic experts (22 datasets). Expert annotation is time-intensive but is considered to produce higher quality annotations. Waseem reports that `systems trained on expert annotations outperform systems trained on amateur annotations.' BIBREF73 and, similarly, D'Orazio et al. claim, `although expert coding is costly, it produces quality data.' BIBREF74. However, the notion of an `expert' remains somewhat fuzzy within abusive content detection research. In many cases, publications only report that `an expert' is used, without specifying the nature of their expertise – even though this can vary substantially. For example, an expert may refer to an NLP practitioner, an undergraduate student with only modest levels of training, a member of an attacked social group relevant to the dataset or a researcher with a doctorate in the study of prejudice. In general, we anticipate that experts in the social scientific study of prejudice/abuse would perform better at annotation tasks than NLP experts who may not have any direct expertise in the conceptual and theoretical issues of abusive content annotation. In particular, one risk of using NLP practitioners, whether students or professionals, is that they might `game' training datasets based on what they anticipate is technically feasible for existing detection systems. For instance, if existing systems perform poorly when presented with long range dependencies, humour or subtle forms of hate (which are nonetheless usually discernible to human readers) then NLP experts could unintentionally use this expectation to inform their annotations and not label such content as hateful.
Professional moderators (3 datasets). Professional moderators offer a standardised approach to content annotation, implemented by experienced workers. This should, in principle, result in high quality annotations. However, one concern is that moderators are output-focused as their work involves determining whether content should be allowed or removed from platforms; they may not provide detailed labels about the nature of abuse and may also set the bar for content labelled `abusive' fairly high, missing out on more nuanced and subtle varieties. In most cases, moderators will annotate for a range of unacceptable content, such as spam and sexual content, and this must be marked in datasets.
A mix of crowdsourcing and experts (6 datasets).
Synthetic data creation (4 datasets). Synthetic datasets are an interesting option as they are inherently non-authentic and therefore not necessarily representative of how abuse manifests in real-world situations. However, if they are created in realistic conditions by experts or relevant content creators then they can mimic real behaviour and have the added advantage that they may have broader coverage of different types of abuse. They are also usually easier to share.
<<</Annotation process>>>
<<<Identity of the annotators>>>
The data statements framework given by Bender and Friedman emphasises the importance of understanding who has completed annotations. Knowing who the annotators are is important because `their own "social address" influences their experience with language and thus their perception of what they are annotating.' BIBREF18 In the context of online abuse, Binns et al. show that the gender of annotators systematically influences what annotations they provide BIBREF75. No annotator will be well-versed in all of the slang or coded meanings used to construct abusive language. Indeed, many of these coded meanings are deliberately covert and obfuscated BIBREF76. To help mitigate these challenges, annotators should be (a) well-qualified and (b) diverse. A homogeneous group of annotators will be poorly equipped to catch all instances of abuse in a corpus. Recruiting an intentionally mixed group of annotators is likely to yield better recall of abuse and thus a more precise dataset BIBREF77.
Information about annotators is unfortunately scarce. In 23 of the training datasets no information is given about the identity of annotators; in 17 datasets very limited information is given, such as whether the annotator is a native speaker of the language; and in just 10 cases is detailed information given. Interestingly, only 4 out of these 10 datasets are in the English language. Relevant information about annotators can be split into (i) Demographic information and (ii) annotators' expertise and experience. In none of the training sets is the full range of annotator information made available, which includes the following (a minimal sketch of how such information could be recorded is given after this list):
Demographic information. The nature of the task affects what information should be provided, as well as the geographic and cultural context. For instance, research on Islamophobia should include, at the very least, information about annotators' religious affiliation. Relevant variables include:
Age
Ethnicity and race
Religion
Gender
Sexual Orientation
Expertise and experience. Relevant variables include:
Field of research
Years of experience
Research status (e.g. research assistant or post-doc)
Personal experiences of abuse. In our review, none of the datasets contained systematic information about whether annotators had been personally targeted by abuse or had viewed such abuse online, even though this can impact annotators' perceptions. Relevant variables include:
Experiences of being targeted by online abuse.
Experiences of viewing online abuse.
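A minimal sketch of how this information could be recorded alongside a dataset is given below. The field names are illustrative and simply follow the variables listed above rather than any established schema; sensitive fields should remain optional and be aggregated before publication to protect annotators.

# Sketch: recording annotator information as part of a data statement.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AnnotatorRecord:
    annotator_id: str                        # pseudonymous identifier
    age_band: Optional[str] = None           # e.g. "25-34"
    gender: Optional[str] = None
    ethnicity: Optional[str] = None
    religion: Optional[str] = None
    sexual_orientation: Optional[str] = None
    field_of_research: Optional[str] = None
    years_of_experience: Optional[int] = None
    research_status: Optional[str] = None    # e.g. "research assistant", "post-doc"
    targeted_by_online_abuse: Optional[bool] = None
    viewed_online_abuse: Optional[bool] = None

@dataclass
class AnnotatorStatement:
    annotators: List[AnnotatorRecord] = field(default_factory=list)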
<<</Identity of the annotators>>>
<<<Guidelines for annotation>>>
A key source of variation across datasets is whether annotators were given detailed guidelines, very minimal guidelines or no guidelines at all. Analysing this issue is made difficult by the fact that many dataset creators do not share their annotation guidelines. 21 of the datasets we study do not provide the guidelines and 14 only provide them in a highly summarised form. In just 15 datasets is detailed information given (and these are reported on in just 9 publications). Requiring researchers to publish annotation guidelines not only helps future researchers to better understand what datasets contain but also to improve and extend them. This could be crucial for improving the quality of annotations; as Ross et al. recommend, `raters need more detailed instructions for annotation.' BIBREF78
The degree of detail given in guidelines is linked to how the notion of `abuse' is understood. Some dataset creators construct clear and explicit guidelines in an attempt to ensure that annotations are uniform and align closely with social scientific concepts. In other cases, dataset creators allow annotators to apply their own perception. For instance, in their Portuguese language dataset, Fortuna et al. ask annotators to `evaluate if according to your opinion, these tweets contain hate speech' BIBREF38. The risk here is that annotators' perceptions may differ considerably; Salminen et al. show that online hate interpretation varies considerably across individuals BIBREF79. This is also reflected in inter-annotator agreement scores for abusive content, which are often very low, particularly for tasks which deploy more than just a binary taxonomy. However, it is unlikely that annotators could ever truly divorce themselves from their own social experience and background to decide on a single `objective' annotation. Abusive content annotation is better understood, epistemologically, as an intersubjective process in which agreement is constructed, rather than an objective process in which a `true' annotation is `found'. For this reason, some researchers have shifted the question of `how can we achieve the correct annotation?' to `who should decide what the correct annotation is?' BIBREF73. Ultimately, whether annotators should be allowed greater freedom in making annotations, and whether this results in higher quality datasets, needs further research and conceptual examination.
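Whatever level of guideline detail is adopted, reporting agreement transparently helps future users judge a dataset. The sketch below computes pairwise Cohen's kappa between annotators with scikit-learn; the toy labels are illustrative, and for more than two annotators a coefficient such as Krippendorff's alpha (available in third-party packages) may be preferable.

# Sketch: pairwise inter-annotator agreement with Cohen's kappa.
# `annotations` maps each annotator to their labels for the same items, in the same order.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def pairwise_kappa(annotations):
    return {
        (a, b): cohen_kappa_score(annotations[a], annotations[b])
        for a, b in combinations(annotations, 2)
    }

example = {
    "ann1": [1, 0, 1, 1, 0],
    "ann2": [1, 0, 0, 1, 0],
    "ann3": [0, 0, 1, 1, 0],
}
print(pairwise_kappa(example))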
Some aspects of abusive language present fundamental issues that are prone to unreliable annotation, such as Irony, Calumniation and Intent. They are intrinsically difficult to annotate given a third-person perspective on a piece of text as they involve making a judgement about indeterminate issues. However, they cannot be ignored given their prevalence in abusive content and their importance to how abuse is expressed. Thus, although they are fundamentally conceptual problems, these issues also present practical problems for annotators, and should be addressed explicitly in coding guidelines. Otherwise, as BIBREF80 note, these issues are likely to drive type II errors in classification, i.e. labelling non-hate-speech utterances as hate speech.
<<<Irony>>>
This covers statements that have a meaning contrary to that one might glean at first reading. Lachenicht BIBREF81 notes that Irony goes against Grice's quality maxim, and as such Ironic content requires closer attention from the reader as it is prone to being misinterpreted. Irony is a particularly difficult issue as in some cases it is primarily intended to provide humour (and thus might legitimately be considered non-abusive) but in other cases is used as a way of veiling genuine abuse. Previous research suggests that the problem is widespread. Sanguinetti et al. BIBREF82 find irony in 11% of hateful tweets in Italian. BIBREF25 find that irony is one of the most common phenomena in self-deleted comments; and that the prevalence of irony is 33.9% amongst deleted comments in a Croatian comment dataset and 18.1% amongst deleted comments in a Slovene comment dataset. Furthermore, annotating irony (as well as related constructs, such as sarcasm and humour) is inherently difficult. BIBREF83 report that agreement on sarcasm amongst annotators working in English is low, something echoed by annotations of Danish content BIBREF84. Irony is also one of the most common reasons for content to be re-moderated on appeal, according to Pavlopoulos et al. BIBREF24.
<<</Irony>>>
<<<Calumniation>>>
This covers false statements, slander, and libel. From the surveyed set, this is annotated in datasets for Greek BIBREF24 and for Croatian and Slovene BIBREF25. Its prevalence varies considerably across these two datasets and reliable estimations of the prevalence of false statements are not available. Calumniation is not only an empirical issue, it also raises conceptual problems: should false information be considered abusive if it slanders or demeans a person? However, if the information is later found to be true, does that make the content any less abusive? Given the contentiousness of `objectivity', and the lack of consensus about most issues in a `post-truth' age BIBREF85, who should decide what is considered true? And, finally, how do we determine whether the content creator knows whether something is true? These ontological, epistemological and social questions are fundamental to the issue of truth and falsity in abusive language. Understandably, most datasets do not take any perspective on the truth and falsity of content. This is a practical solution: given error rates in abusive language detection as well as error rates in fact-checking, a system which combined both could be inapplicable in practice.
<<</Calumniation>>>
<<<Intent>>>
This information about the utterer's state of mind is a core part of how many types of abusive language are defined. Intent is usually used to emphasize the wrongness of abusive behaviour, such as spreading, inciting, promoting or justifying hatred or violence towards a given target, or sending a message that aims at dehumanising, delegitimising, hurting or intimidating them BIBREF82. BIBREF81 postulate that "aggravation, invective and rudeness ... may be performed with varying degrees of intention to hurt", and cite five legal degrees of intent BIBREF86. However, it is difficult to discern the intent of another speaker in a verbal conversation between humans, and even more difficult to do so through written and computer-mediated communications BIBREF87. Nevertheless, intent is particularly important for some categories of abuse such as bullying, maliciousness and hostility BIBREF34, BIBREF32. Most of the guidelines for the datasets we have studied do not contain an explicit discussion of intent, although there are exceptions. BIBREF88 include intent as a core part of their annotation standard, noting that understanding context (such as by seeing a speaker's other online messages) is crucial to achieving quality annotations. However, this proposition poses conceptual challenges given that people's intent can shift over time. Deleted comments have been used to study potential expressions of regret by users and, as such, a change in their intent BIBREF89, BIBREF25; this has also been reported as a common motivator even in self-deletion of non-abusive language BIBREF90. Equally, engaging in a sequence of targeted abusive language is an indicator of aggressive intent, and appears in several definitions. BIBREF23 require an "intent to physically assert power over women" as a requirement for multiple categories of misogynistic behaviour. BIBREF34 find that messages that are "unapologetically or intentionally offensive" fit in the highest grade of trolling under their schema.
Kenny et al. BIBREF86 note how sarcasm, irony, and humour complicate the picture of intent by introducing considerable difficulties in discerning the true intent of speakers (as discussed above). Part of the challenge is that many abusive terms, such as slurs and insults, are polysemic and may be co-opted by an ingroup into terms of entertainment and endearment BIBREF34.
<<</Intent>>>
<<</Guidelines for annotation>>>
<<</Annotation of training datasets>>>
<<</Analysis of training datasets>>>
<<<Dataset sharing>>>
<<<The challenges and opportunities of achieving Open Science>>>
All of the training datasets we analyse are publicly accessible and as such can be used by researchers other than the authors of the original publication. Sharing data is an important aspect of open science but also poses ethical and legal risks, especially in light of recent regulatory changes, such as the introduction of GDPR in the UK BIBREF91, BIBREF92. This problem is particularly acute with abusive content, which can be deeply shocking, and some training datasets from highly cited publications have not been made publicly available BIBREF93, BIBREF94, BIBREF95. Open science initiatives can also raise concerns amongst the public, who may not be comfortable with researchers sharing their personal data BIBREF96, BIBREF97.
The difficulty of sharing data in sensitive areas of research is reflected by the Islamist extremism research website, `Jihadology'. It chose to restrict public access in 2019, following efforts by Home Office counter-terrorism officials to shut it down completely. They were concerned that, whilst it aimed to support academic research into Islamist extremism, it may have inadvertently enabled individuals to radicalise by making otherwise banned extremist material available. By working with partners such as the not-for-profit Tech Against Terrorism, Jihadology created a secure area in the website, which can only be accessed by approved researchers. Some of the training datasets in our list have similar requirements, and can only be accessed following a registration process.
Open sharing of datasets is not only a question of scientific integrity and a powerful way of advancing scientific knowledge. It is also, fundamentally, a question of fairness and power. Opening access to datasets will enable less-well funded researchers and organisations, which includes researchers in the Global South and those working for not-for-profit organisations, to steer and contribute to research. This is a particularly pressing issue in a field which is directly concerned with the experiences of often-marginalised communities and actors BIBREF36. For instance, one growing concern is the biases encoded in detection systems and the impact this could have when they are applied in real-world settings BIBREF9, BIBREF10. This research could be further advanced by making more datasets and detection systems more easily available. For instance, Binns et al. use the detailed metadata in the datasets provided by Wulczyn et al. to investigate how the demographics of annotators impacts the annotations they make BIBREF75, BIBREF29. The value of such insights is only clear after the dataset has been shared – and, equally, is only possible because of data sharing.
More effective ways of sharing datasets would address the fact that datasets often deteriorate after they have been published BIBREF13. Several of the most widely used datasets provide only the annotations and IDs and must be `rehydrated' to collect the content. Both of the datasets provided by Waseem and Hovy and Founta et al. must be collected in this way BIBREF98, BIBREF36, and both have degraded considerably since they were first released as the tweets are no longer available on Twitter. Chung et al. also estimate that within 12 months the recently released dataset for counterspeech by Mathew et al. had lost more than 60% of its content BIBREF65, BIBREF58. Dataset degradation poses three main risks: First, if less data is available then there is a greater likelihood of overfitting. Second, the class distributions usually change as proportionally more of the abusive content is taken down than the non-abusive. Third, it is also likely that the more overt forms of abuse are taken down, rather than the covert instances, thereby changing the qualitative nature of the dataset.
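Degradation can at least be measured and reported whenever an ID-only dataset is re-used. The sketch below estimates how much of a dataset of tweet IDs can still be rehydrated; it assumes the Twitter/X API v2 tweet-lookup endpoint (up to 100 IDs per request) and a valid bearer token, and the endpoint details should be checked against current documentation before use.

# Sketch: estimate how much of an ID-only dataset can still be rehydrated.
import requests

def rehydration_rate(tweet_ids, bearer_token):
    headers = {"Authorization": f"Bearer {bearer_token}"}
    recovered = 0
    for start in range(0, len(tweet_ids), 100):
        batch = tweet_ids[start:start + 100]
        resp = requests.get(
            "https://api.twitter.com/2/tweets",
            params={"ids": ",".join(str(i) for i in batch)},
            headers=headers,
            timeout=30,
        )
        resp.raise_for_status()
        recovered += len(resp.json().get("data", []))  # deleted or suspended tweets are simply absent
    return recovered / len(tweet_ids)

Reporting this rate alongside any results obtained on a rehydrated dataset makes the comparability problem visible rather than hidden.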
<<</The challenges and opportunities of achieving Open Science>>>
<<<Research infrastructure: Solutions for sharing training datasets>>>
The problem of data access and sharing remains unresolved in the field of abusive content detection, much like other areas of computational research BIBREF99. At present, an ethical, secure and easy way of sharing sensitive tools and resources has not been developed and adopted in the field. More effective dataset sharing would (1) enable greater collaboration amongst researchers, (2) enhance the reproducibility of research by encouraging greater scrutiny BIBREF100, BIBREF101, BIBREF102 and (3) substantively advance the field by enabling future researchers to better understand the biases and limitations of existing research and to identify new research directions.
There are two main challenges which must be overcome to ensure that training datasets can be shared and used by future researchers. First, dataset quality: the size, class distribution and quality of their content must be maintained. Second, dataset access: access to datasets must be controlled so that researchers can use them, whilst respecting platforms' Terms of Service and preventing potential extremists from gaining access. These problems are closely entwined and the solutions available, which follow, have implications for both of them.
Synthetic datasets. Four of the datasets we have reviewed were developed synthetically. This resolves the dataset quality problem but introduces additional biases and limitations because the data is not real. Synthetic datasets still need to be shared in such a way as to limit access for potential extremists but face no challenges from platforms' Terms of Service.
Data `philanthropy' or `donations'. These are defined as `the act of an individual actively consenting to donate their personal data for research' BIBREF97. Donated data from many individuals could then be combined and shared – but it would still need to be annotated. A further challenge is that many individuals who share abusive content may be unwilling to `donate' their data as this is commonly associated with prosocial motivations, creating severe class imbalances BIBREF97. Data donations could also open new moral and ethical issues; individuals' privacy could be impacted if data is re-analysed to derive new unexpected insights BIBREF103. Informed consent is difficult given that the exact nature of analyses may not be known in advance. Finally, data donations alone do not solve how access can be responsibly protected and how platforms' Terms of Service can be met. For these reasons, data donations are unlikely to be a key part of future research infrastructure for abusive content detection.
Platform-backed sharing. Platforms could share datasets and support researchers' access. There are no working examples of this in abusive content detection research, but it has been successfully used in other research areas. For instance, Twitter has made available a large dataset of accounts linked to potential information operations, known as the "IRA" dataset (Internet Research Agency). This would require considerably more interfaces between academia and industry, which may be difficult given the challenges associated with existing initiatives, such as Social Science One. However, in the long term, we propose that this is the most effective solution for the problem of sharing training datasets. Not only because it removes Terms of Service limitations but also because platforms have large volumes of original content which has been annotated in a detailed way. This could take one of two forms: platforms either make content which has violated their Community Guidelines available directly or they provide special access post-hoc to datasets which researchers have collected publicly through their API - thereby making sure that datasets do not degrade over time.
Data trusts. Data trusts have been described as a way of sharing data `in a fair, safe and equitable way' (BIBREF104, p. 46). However, there is considerable disagreement as to what they entail and how they would operate in practice BIBREF105. The Open Data Institute identifies that data trusts aim to make data open and accessible by providing a framework for storing and accessing data, terms and mechanisms for resolving disputes and, in some cases, contracts to enforce them. For abusive content training datasets, this would provide a way of enabling datasets to be shared, although it would require considerable institutional, legal and financial commitments.
Arguably, the easiest way of ensuring data can be shared is to maintain a very simple data trust, such as a database, which would contain all available abusive content training datasets. This repository would need to be permissioned and access controlled to address concerns relating to privacy and ethics. Such a repository could substantially reduce the burden on researchers; once they have been approved to the repository, they could access all datasets publicly available – different levels of permission could be implemented for different datasets, depending on commercial or research sensitivity. Furthermore, this repository could contain all of the metadata reported with datasets and such information could be included at the point of deposit, based on the `data statements' work of Bender and Friedman BIBREF18. A simple API could be developed for depositing and reading data, similar to that of the HateBase. The permissioning system could be maintained either through a single institution or, to avoid power concentrating amongst a small group of researchers, through a decentralised blockchain.
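To make the idea concrete, the sketch below shows one possible shape for such a permissioned deposit/read interface, written with Flask. The endpoints, the token check and the requirement to attach a data statement are hypothetical design choices, not a description of HateBase or of any existing service.

# Sketch: a minimal permissioned repository API (all endpoints and checks are hypothetical).
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
DATASETS = {}                        # dataset_id -> deposited payload
APPROVED_TOKENS = {"example-token"}  # in practice, issued only after a vetting process

def require_token():
    token = request.headers.get("Authorization", "").replace("Bearer ", "", 1)
    if token not in APPROVED_TOKENS:
        abort(403)

@app.route("/datasets", methods=["POST"])
def deposit_dataset():
    require_token()
    payload = request.get_json()
    if "data_statement" not in payload:  # a data statement is required at deposit time
        abort(400, "A data statement must accompany every deposit.")
    DATASETS[payload["id"]] = payload
    return jsonify({"status": "stored", "id": payload["id"]})

@app.route("/datasets/<dataset_id>", methods=["GET"])
def read_dataset(dataset_id):
    require_token()
    if dataset_id not in DATASETS:
        abort(404)
    return jsonify(DATASETS[dataset_id])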
<<</Research infrastructure: Solutions for sharing training datasets>>>
<<<A new repository of training datasets: Hatespeechdata.com>>>
The resources and infrastructure to create a dedicated data trust and API for sharing abusive content training datasets are substantial and require considerable further engagement with research teams in this field. In the interim, to encourage greater sharing of datasets, we have launched a dedicated website which contains all of the datasets analysed here: https://hatespeechdata.com. Based on the analysis in the previous sections, we have also provided partial data statements BIBREF18. The website also contains previously published abusive keyword dictionaries, which are not analysed here but some researchers may find useful. Note that the website only contains information/data which the original authors have already made publicly available elsewhere. It will be updated with new datasets in the future.
<<</A new repository of training datasets: Hatespeechdata.com>>>
<<</Dataset sharing>>>
<<<Best Practices for training dataset creation>>>
Much can be learned from existing efforts to create abusive language datasets. We identify best practices which emerge at four distinct points in the process of creating a training dataset: (1) task formation, (2) data selection, (3) annotation, and (4) documentation.
<<<Task formation: Defining the task addressed by the dataset>>>
Dataset creation should be `problem driven' BIBREF106 and should address a well-defined and specific task, with a clear motivation. This will directly inform the taxonomy design, which should be well-specified and engage with social scientific theory as needed. Defining a clear task which the dataset addresses is especially important given the maturation of the field, ongoing terminological disagreement and the complexity of online abuse. The diversity of phenomena that fits under the umbrella of abusive language means that `general purpose' datasets are unlikely to advance the field. New datasets are most valuable when they address a new target, generator, phenomenon, or domain. Creating datasets which repeat existing work is not nearly as valuable.
<<</Task formation: Defining the task addressed by the dataset>>>
<<<Selecting data for abusive language annotation>>>
Once the task is established, dataset creators should select what language will be annotated, where data will be sampled from and how sampling will be completed. Any data selection exercise is bound to introduce bias, and so it is important to record what decisions are made (and why) in this step. Dataset builders should have a specific target size in mind and also have an idea of the minimum amount of data that is likely to be needed for the task. This is also where steps 1 and 2 intersect: the data selection should be driven by the problem that is addressed rather than what is easy to collect. Ensuring there are enough positive examples of abuse will always be challenging as the prevalence of abuse is so low. However, given that purposive sampling inevitably introduces biases, creators should explore a range of options before determining the best one – and consider using multiple sampling methods at once, such as including data from different times, different locations, different types of users and different platforms. Other options include using measures of linguistic diversity to maximize the variety of text included in datasets, or including words that cluster close to known abusive terms.
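One way to operationalise the last two suggestions is sketched below: a seed list of known abusive terms is expanded with nearest neighbours from a word-embedding model, and keyword-matched posts are then mixed with randomly sampled ones. It assumes a locally available pre-trained gensim KeyedVectors file; the seed terms, proportions and file format are placeholders.

# Sketch: expand a seed keyword list with embedding neighbours, then mix sampling strategies.
import random
from gensim.models import KeyedVectors

def expand_seed_terms(embedding_path, seed_terms, top_n=10):
    vectors = KeyedVectors.load(embedding_path)  # assumes a gensim-native .kv file
    expanded = set(seed_terms)
    for term in seed_terms:
        if term in vectors:
            expanded.update(word for word, _ in vectors.most_similar(term, topn=top_n))
    return expanded

def mixed_sample(posts, keywords, keyword_share=0.5, sample_size=10000, seed=42):
    random.seed(seed)
    matched, unmatched = [], []
    for post in posts:
        (matched if any(k in post.lower() for k in keywords) else unmatched).append(post)
    n_keyword = min(len(matched), int(sample_size * keyword_share))
    n_random = min(len(unmatched), sample_size - n_keyword)
    return random.sample(matched, n_keyword) + random.sample(unmatched, n_random)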
<<</Selecting data for abusive language annotation>>>
<<<Annotating abusive language>>>
Annotators must be hired, trained and given appropriate guidelines. Annotators work best with solid guidelines that are easy to grasp and have clear examples BIBREF107. The best examples are both illustrative, in order to capture the concepts (such as `threatening language') and provide insight into `edge cases', which is content that only just crosses the line into abuse. Decisions should be made about how to handle intrinsically difficult aspects of abuse, such as irony, calumniation and intent (see above). Annotation guidelines should be developed iteratively by dataset creators; by working through the data, rules can be established for difficult or counter-intuitive coding decisions, and a set of shared practices developed. Annotators should be included in this iterative process. Discussions with annotators about the language that they have seen "in the field" offer an opportunity to enhance and refine guidelines - and even taxonomies. Such discussions will lead to more consistent data and provide a knowledge base to draw on for future work. To achieve this, it is important to adopt an open culture where annotators are comfortable providing open feedback and also describing their uncertainties. Annotators should also be given emotional and practical support (as well as appropriate financial compensation), and the harmful and potentially triggering effects of annotating online abuse should be recognised at all times. For a set of guidelines to help protect the well-being of annotators, see BIBREF13.
<<</Annotating abusive language>>>
<<<Documenting methods, data, and annotators>>>
The best training datasets provide as much information as possible and are well-documented. When the method behind them is unclear, they are hard to evaluate, use and build on. Providing as much information as possible can open new and unanticipated analyses and gives more agency to future researchers who use the dataset to create classifiers. For instance, if all annotators' codings are provided (rather than just the `final' decision) then a more nuanced and aware classifier could be developed as, in some cases, it can be better to maximise recall of annotations rather than maximise agreement BIBREF77.
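The difference this makes can be shown in a few lines. Given every annotator's raw coding for each item, a dataset user can choose between a majority-vote label and a recall-oriented label that marks an item as abusive if any annotator did so; the data layout below is illustrative.

# Sketch: two ways of turning raw annotator codings into a single label.
# `codings` is a list of per-item label lists, e.g. [[1, 0, 1], [0, 0, 0], ...].
def majority_vote(codings):
    return [1 if sum(item) > len(item) / 2 else 0 for item in codings]

def any_positive(codings):
    # recall-oriented: abusive if at least one annotator marked it as abusive
    return [1 if any(item) else 0 for item in codings]

raw = [[1, 0, 1], [0, 0, 1], [0, 0, 0]]
print(majority_vote(raw))  # [1, 0, 0]
print(any_positive(raw))   # [1, 1, 0]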
Our review found that most datasets have poor methodological descriptions and few (if any) provide enough information to construct an adequate data statement. It is crucial that dataset creators are up front about their biases and limitations: every dataset is biased, and this is only problematic when the biases are unknown. One strategy for doing this is to maintain a document of decisions made when designing and creating the dataset and to then use it to describe to readers the rationale behind decisions. Details about the end-to-end dataset creation process are welcomed. For instance, if the task is crowdsourced then a screenshot of the micro-task presented to workers should be included, and the top-level parameters should be described (e.g. number of workers, maximum number of tasks per worker, number of annotations per piece of text) BIBREF20. If a dedicated interface is used for the annotation, this should also be described and screenshotted as the interface design can influence the annotations.
<<</Documenting methods, data, and annotators>>>
<<<Best practice summary>>>
Unfortunately, as with any burgeoning field, there is confusion and overlap around many of the phenomena discussed in this paper; coupled with the high degree of variation in the quality of method descriptions, this has led to many pieces of research that are hard to combine, compare, or re-use. Our reflections on best practices are driven by this review and the difficulties of creating high-quality training datasets. For future researchers, we summarise our recommendations in the following seven points:
Bear in mind the purpose of the dataset; design the dataset to help address questions and problems from previous research.
Avoid using `easy to access' data, and instead explore new sources which may have greater diversity. Consider what biases may be created by your sampling method.
Determine size based on data sparsity and having enough positive classes rather than `what is possible'.
Establish a clear taxonomy to be used for the task, with meaningful and theoretically sound categories.
Provide annotators with guidelines; develop them iteratively and publish them with your dataset. Consider using trained annotators given the complexities of abusive content.
Involve people who have direct experience of the abuse which you are studying whenever possible (and provided that you can protect their well-being).
Report on every step of the research through a Data Statement.
<<</Best practice summary>>>
<<</Best Practices for training dataset creation>>>
<<<Conclusion>>>
This paper examined a large set of datasets for the creation of abusive content detection systems, providing insight into what they contain, how they are annotated, and how tasks have been framed. Based on an evidence-driven review, we provided an extended discussion of how to make training datasets more readily available and useful, including the challenges and opportunities of open science as well as the need for more research infrastructure. We reported on the development of hatespeechdata.com – a new repository for online abusive content training datasets. Finally, we outlined best practices for creation of training datasets for detection of online abuse. We have effectively met the four research aims elaborated at the start of the paper.
Training detection systems for online abuse is a substantial challenge with real social consequences. If we want the systems we develop to be useable, scalable and with few biases then we need to train them on the right data: garbage in will only lead to garbage out.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Introduction, Background"
],
"type": "disordered_section"
}
|
2004.01670
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Directions in Abusive Language Training Data: Garbage In, Garbage Out
<<<Abstract>>>
Data-driven analysis and detection of abusive online content covers many different tasks, phenomena, contexts, and methodologies. This paper systematically reviews abusive language dataset creation and content in conjunction with an open website for cataloguing abusive language data. This collection of knowledge leads to a synthesis providing evidence-based recommendations for practitioners working with this complex and highly diverse data.
<<</Abstract>>>
<<<Introduction>>>
Abusive online content, such as hate speech and harassment, has received substantial attention over the past few years for its malign social effects. Left unchallenged, abusive content risks harming those who are targeted, toxifying public discourse, exacerbating social tensions and excluding some groups from public spaces. As such, systems which can accurately detect and classify online abuse at scale, in real-time and without bias are of central interest to tech companies, policymakers and academics.
Most detection systems rely on having the right training dataset, reflecting one of the most widely accepted mantras in computer science: Garbage In, Garbage Out. Put simply: to have systems which can detect and classify abusive online content effectively, one needs appropriate datasets with which to train them. However, creating training datasets is often a laborious and non-trivial task – and creating datasets which are non-biased, large and theoretically-informed is even more difficult (BIBREF0 p. 189). We address this issue by examining and reviewing publicly available datasets for abusive content detection, which we provide access to on a new dedicated website, hatespeechdata.com.
In the first section, we examine previous reviews and present the four research aims which guide this paper. In the second section, we conduct a critical and in-depth analysis of the available datasets, discussing first what their aim is, how tasks have been described and what taxonomies have been constructed and then, second, what they contain and how they were annotated. In the third section, we discuss the challenges of open science in this research area and elaborate different ways of sharing training datasets, including the website hatespeechdata.com. In the final section, we draw on our findings to establish best practices when creating datasets for abusive content detection.
<<</Introduction>>>
<<<Background>>>
The volume of research examining the social and computational aspects of abusive content detection has expanded prodigiously in the past five years. This has been driven by growing awareness of the importance of the Internet more broadly BIBREF1, greater recognition of the harms caused by online abuse BIBREF2, and policy and regulatory developments, such as the EU's Code of Conduct on Hate, the UK Government's `Online Harms' white paper BIBREF3, Germany's NetzDG laws, the Public Pledge on Self-Discipline for the Chinese Internet Industry, and France's anti-hate regulation BIBREF2. In 2020 alone, three computer science venues will host workshops on online hate (TRAC and STOC at LREC, and WOAH at EMNLP), and a shared task at 2019's SemEval on online abuse detection reports that 800 teams downloaded the training data and 115 submitted detection systems BIBREF4. At the same time, social scientific interventions have also appeared, deepening our understanding of how online abuse spreads BIBREF5 and how its harmful impact can be mitigated and challenged BIBREF6.
All analyses of online abuse ultimately rely on a way of measuring it, which increasingly means having a method which can handle the sheer volume of content produced, shared and engaged with online. Traditional qualitative methods cannot scale to handle the hundreds of millions of posts which appear on each major social media platform every day, and can also introduce inconsistencies and biases BIBREF7. Computational tools have emerged as the most promising way of classifying and detecting online abuse, drawing on work in machine learning, Natural Language Processing (NLP) and statistical modelling. Increasingly sophisticated architectures, features and processes have been used to detect and classify online abuse, leveraging technically sophisticated methods, such as contextual word embeddings, graph embeddings and dependency parsing. Despite their many differences BIBREF8, nearly all methods of online abuse detection rely on a training dataset, which is used to teach the system what is and is not abuse. However, there is a lacuna of research on this crucial aspect of the machine learning process. Indeed, although several general reviews of the field have been conducted, no previous research has reviewed training datasets for abusive content detection in sufficient breadth or depth. This is surprising given (i) their fundamental importance in the detection of online abuse and (ii) growing awareness that several existing datasets suffer from many flaws BIBREF9, BIBREF10. Closely relevant work includes:
Schmidt and Wiegand conduct a comprehensive review of research into the detection and classification of abusive online content. They discuss training datasets, stating that `to perform experiments on hate speech detection, access to labelled corpora is essential' (BIBREF8, p. 7), and briefly discuss the sources and size of the most prominent existing training datasets, as well as how datasets are sampled and annotated. Schmidt and Wiegand identify two key challenges with existing datasets. First, `data sparsity': many training datasets are small and lack linguistic variety. Second, metadata (such as how data was sampled) is crucial as it lets future researchers understand unintended biases, but is often not adequately reported (BIBREF8, p. 6).
Waseem et al. BIBREF11 outline a typology of detection tasks, based on a two-by-two matrix of (i) identity- versus person- directed abuse and (ii) explicit versus implicit abuse. They emphasise the importance of high-quality datasets, particularly for more nuanced expressions of abuse: `Without high quality labelled data to learn these representations, it may be difficult for researchers to come up with models of syntactic structure that can help to identify implicit abuse.' (BIBREF11, p. 81)
Jurgens et al. BIBREF12 also conduct a critical review of hate speech detection, and note that `labelled ground truth data for building and evaluating classifiers is hard to obtain because platforms typically do not share moderated content due to privacy, ethical and public relations concerns.' (BIBREF12, p. 3661) They argue that the field needs to `address the data scarcity faced by abuse detection research' in order to better address more complex research issues and pressing social challenges, such as `develop[ing] proactive technologies that counter or inhibit abuse before it harms' (BIBREF12, pp. 3658, 3661).
Vidgen et al. describe several limitations with existing training datasets for abusive content, most noticeably how `they contain systematic biases towards certain types and targets of abuse.' BIBREF13[p.2]. They describe three issues in the quality of datasets: degradation (whereby datasets decline in quality over time), annotation (whereby annotators often have low agreement, indicating considerable uncertainty in class assignments) and variety (whereby `The quality, size and class balance of datasets varies considerably.' [p. 6]).
Chetty and Alathur BIBREF14 review the use of Internet-based technologies and online social networks to study the spread of hateful, offensive and extremist content. Their review covers both computational and legal/social scientific aspects of hate speech detection, and outlines the importance of distinguishing between different types of group-directed prejudice. However, they do not consider training datasets in any depth.
Fortuna and Nunes BIBREF15 provide an end-to-end review of hate speech research, including the motivations for studying online hate, definitional challenges, dataset creation/sharing, and technical advances, both in terms of feature selection and algorithmic architecture (BIBREF15, 2018). They delineate between different types of online abuse, including hate, cyberbullying, discrimination and flaming, and add much needed clarity to the field. They show that (1) dataset size varies considerably but they are generally small (mostly containing fewer than 10,000 entries), (2) Twitter is the most widely-studied platform, and (3) most papers research hate speech per se (i.e. without specifying a target). Of those which do specify a target, racism and sexism are the most researched. However, their review focuses on publications rather than datasets: the same dataset might be used in multiple studies, limiting the relevance of their review for understanding the intrinsic role of training datasets. They also only engage with datasets fairly briefly, as part of a much broader review.
Several classification papers also discuss the most widely used datasets, including Davidson et al. BIBREF16 who describe five datasets, and Salminen et al. who review 17 datasets and describe four in detail BIBREF17.
This paper addresses this lacuna in existing research, providing a systematic review of available training datasets for online abuse. To provide structure to this review, we adopt the `data statements' framework put forward by Bender and Friedman BIBREF18, as well as other work providing frameworks, schema and processes for analysing NLP artefacts BIBREF19, BIBREF20, BIBREF21. Data statements are a way of documenting the decisions which underpin the creation of datasets used for Natural Language Processing (NLP). They formalise how decisions should be documented, not only ensuring scientific integrity but also addressing `the open and urgent question of how we integrate ethical considerations in the everyday practice of our field' (BIBREF18, p. 587). In many cases, we find that it is not possible to fully recreate the level of detail recorded in an original data statement from how datasets are described in publications. This reinforces the importance of proper documentation at the point of dataset creation.
As the field of online abusive content detection matures, it has started to tackle more complex research challenges, such as multi-platform, multi-lingual and multi-target abuse detection, and systems are increasingly being deployed in `the wild' for social scientific analyses and for content moderation BIBREF5. Such research heightens the focus on training datasets as exactly what is being detected comes under greater scrutiny. To enhance our understanding of this domain, our review paper has four research aims.
Research Aim One: to provide an in-depth and critical analysis of the available training datasets for abusive online content detection.
Research Aim Two: to map and discuss ways of addressing the lack of dataset sharing, and as such the lack of `open science', in the field of online abuse research.
Research Aim Three: to introduce the website hatespeechdata.com, as a way of enabling more dataset sharing.
Research Aim Four: to identify best practices for creating an abusive content training dataset.
<<</Background>>>
<<<Analysis of training datasets>>>
To identify training datasets for abusive content detection, relevant publications were gathered from four sources:
The Scopus database of academic publications, identified using keyword searches.
The ACL Anthology database of NLP research papers, identified using keyword searches.
The ArXiv database of preprints, identified using keyword searches.
Proceedings of the 1st, 2nd and 3rd workshops on abusive language online (ACL).
Most publications report on the creation of one abusive content training dataset. However, some describe several new datasets simultaneously or provide one dataset with several distinct subsets of data BIBREF22, BIBREF23, BIBREF24, BIBREF25. For consistency, we separate out each subset of data where they are in different languages or the data is collected from different platforms. As such, the number of datasets is greater than the number of publications. All of the datasets were released between 2016 and 2019, as shown in Figure FIGREF17.
<<<The purpose of training datasets>>>
<<<Problems addressed by datasets>>>
Creating a training dataset for online abuse detection is typically motivated by the desire to address a particular social problem. These motivations can inform how a taxonomy of abusive language is designed, how data is collected and what instructions are given to annotators. We identify the following motivating reasons, which were explicitly referenced by dataset creators.
Reducing harm: Aggressive, derogatory and demeaning online interactions can inflict harm on individuals who are targeted by such content and those who are not targeted but still observe it. This has been shown to have profound long-term consequences on individuals' well-being, with some vulnerable individuals expressing concerns about leaving their homes following experiences of abuse BIBREF26. Accordingly, many dataset creators state that aggressive language and online harassment are social problems which they want to help address.
Removing illegal content: Many countries legislate against certain forms of speech, e.g. direct threats of violence. For instance, the EU's Code of Conduct requires that all content that is flagged for being illegal online hate speech is reviewed within 24 hours, and removed if necessary BIBREF27. Many large social media platforms and tech companies adhere to this code of conduct (including Facebook, Google and Twitter) and, as of September 2019, 89% of such content is reviewed in 24 hours BIBREF28. However, we note that in most cases the abuse that is marked up in training datasets falls short of the requirements of illegal online hate – indeed, as most datasets are taken from public API access points, the data has usually already been moderated by the platforms and most illegal content removed.
Improving health of online conversations: The health of online communities can be severely affected by abusive language. It can fracture communities, exacerbate tensions and even repel users. This is not only bad for the community and for civic discourse in general; it also negatively impacts engagement and thus the revenue of the host platforms. Therefore, there is a growing impetus to improve user experience and ensure online dialogues are healthy, inclusive and respectful where possible. There is ample scope for improvement: a study showed that 82% of personal attacks on Wikipedia against other editors are not addressed BIBREF29. Taking steps to improve the health of exchanges in online communities will also benefit commercial and voluntary content moderators. They are routinely exposed to such content, often with insufficient safeguards, and sometimes display symptoms similar to those of PTSD BIBREF30. Automatic tools could help to lessen this exposure, reducing the burden on moderators.
<<</Problems addressed by datasets>>>
<<<Uses of datasets: How detection tasks are defined>>>
Myriad tasks have been addressed in the field of abusive online content detection, reflecting the different disciplines, motivations and assumptions behind research. This has led to considerable variation in what is actually detected under the rubric of `abusive content', and establishing a degree of order over the diverse categorisations and subcategorisations is both difficult and somewhat arbitrary. Key dimensions which dataset creators have used to categorise detection tasks include who/what is targeted (e.g. groups vs. individuals), the strength of content (e.g. covert vs. overt), the nature of the abuse (e.g. benevolent vs. hostile sexism BIBREF31), how the abuse manifests (e.g. threats vs. derogatory statements), the tone (e.g. aggressive vs. non-aggressive), the specific target (e.g. ethnic minorities vs. women), and the subjective perception of the reader (e.g. disrespectful vs. respectful). Other important dimensions include the theme used to express abuse (e.g. Islamophobia which relies on tropes about terrorism vs. tropes about sexism) and the use of particular linguistic devices, such as appeals to authority, sincerity and irony. All of these dimensions can be combined in different ways, producing a large number of intersecting tasks.
Consistency in how tasks are described will not necessarily ensure that datasets can be used interchangeably. From the description of a task, an annotation framework must be developed which converts the conceptualisation of abuse into a set of standards. This formalised representation of the `abuse' inevitably involves shortcuts, imperfect rules and simplifications. If annotation frameworks are developed and applied differently, then even datasets aimed at the same task can still vary considerably. Nonetheless, how detection tasks for online abuse are described is crucial for how the datasets – and in turn the systems trained on them – can subsequently be used. For example, a dataset annotated for hate speech can be used to examine bigoted biases, but the reverse is not true. How datasets are framed also impacts whether, and how, datasets can be combined to form large `mega-datasets' – a potentially promising avenue for overcoming data sparsity BIBREF17.
In the remainder of this section, we provide a framework for splitting out detection tasks along the two most salient dimensions: (1) the nature of abuse and (2) the granularity of the taxonomy.
<<<Detection tasks: the nature of abuse>>>
This refers to what is targeted/attacked by the content and, subsequently, how the taxonomy has been designed/framed by the dataset creators. The most well-established taxonomic distinction in this regard is the difference between (i) the detection of interpersonal abuse, and (ii) the detection of group-directed abuse BIBREF11. Other authors have sought to deductively theorise additional categories, such as `concept-directed' abuse, although these have not been widely adopted BIBREF13. Through an inductive investigation of existing training datasets, we extend this binary distinction to four primary categories of abuse which have been studied in previous work, as well as a fifth `Mixed' category.
Person-directed abuse. Content which directs negativity against individuals, typically through aggression, insults, intimidation, hostility and trolling, amongst other tactics. Most research falls under the auspices of `cyber bullying', `harassment' and `trolling' BIBREF23, BIBREF32, BIBREF33. One major dataset of English Wikipedia editor comments BIBREF29 focuses on the `personal attack' element of harassment, drawing on prior investigations that mapped out harassment in that community. Another widely used dataset focuses on trolls' intent to intimidate, distinguishing between direct harassment and other behaviours BIBREF34. An important consideration in studies of person-directed abuse is (a) interpersonal relations, such as whether individuals engage in patterns of abuse or one-off acts and whether they are known to each other in the `real' world (both of which are a key concern in studies of cyberbullying) and (b) standpoint, such as whether individuals directly engage in abuse themselves or encourage others to do so. For example, the theoretically sophisticated synthetic dataset provided by BIBREF33 identifies not only harassment but also encouragement to harassment. BIBREF22 mark up posts from computer game forums (World of Warcraft and League of Legends) for cyberbullying and annotate these as $\langle $offender, victim, message$\rangle $ tuples.
Group-directed abuse. Content which directs negativity against a social identity, which is defined in relation to a particular attribute (e.g. ethnic, racial, religious groups) BIBREF35. Such abuse is often directed against marginalised or under-represented groups in society. Group-directed abuse is typically described as `hate speech' and includes use of dehumanising language, making derogatory, demonising or hostile statements, making threats, and inciting others to engage in violence, amongst other dangerous communications. Common examples of group-directed abuse include sexism, which is included in datasets provided by BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF33 and racism, which is directly targeted in BIBREF36, BIBREF40. In some cases, specific types of group-directed abuse are subsumed within a broader category of identity-directed abuse, as in BIBREF41, BIBREF42, BIBREF4. Determining the limits of any group-directed abuse category requires careful theoretical reflection, as with the decision to include ethnic, caste-based and certain religious prejudices under `racism'. There is no `right' answer to such questions as they engage with ontological concerns about identification and `being' and the politics of categorization.
Flagged content. Content which is reported by community members or assessed by community and professional content moderators. This covers a broad range of focuses as moderators may also remove spam, sexually inappropriate content and other undesirable contributions. In this regard, `flagged' content is akin to the concept of `trolling', which covers a wide range of behaviours, from jokes and playful interventions through to sinister personal attacks such as doxxing BIBREF43. Some forms of trolling can be measured with tools such as the Global Assessment of Internet Trolling (GAIT) BIBREF43.
Incivility. Content which is considered to be incivil, rude, inappropriate, offensive or disrespectful BIBREF24, BIBREF25, BIBREF44. Such categories are usually defined with reference to the tone that the author adopts rather than the substantive content of what they express, which is the basis of person- and group- directed categories. Such content usually contains obscene, profane or otherwise `dirty' words. This can be easier to detect as closed-class lists are effective at identifying single objectionable words (e.g. BIBREF45). However, one concern with this type of research is that the presence of `dirty' words does not necessarily signal malicious intent or abuse; they may equally be used as intensifiers or colloquialisms BIBREF46. At the same time, detecting incivility can be more difficult as it requires annotators to infer the subjective intent of the speaker or to understand (or guess) the social norms of a setting and thus whether disrespect has been expressed BIBREF42. Content can be incivil without directing hate against a group or person, and can be inappropriate in one setting but not another: as such it tends to be more subjective and contextual than other types of abusive language.
Mixed. Content which contains multiple types of abuse, usually a combination of the four categories discussed above. The intersecting nature of online language means that this is common but can also manifest in unexpected ways. For instance, female politicians may receive more interpersonal abuse than other politicians. This might not appear as misogyny because their identity as women is not referenced – but it might have motivated the abuse they were subjected to. Mixed forms of abuse require further research, and have thus far been most fully explored in the OLID dataset provided by BIBREF4, who explore several facets of abuse under one taxonomy.
<<</Detection tasks: the nature of abuse>>>
<<<Detection tasks: Granularity of taxonomies>>>
This refers to how much detail a taxonomy contains, reflected in the number of unique classes. The most important and widespread distinction is whether a binary class is used (e.g. Hate / Not) or a multi-level class, such as a tripartite split (typically, Overt, Covert and Non-abusive). In some cases, a large number of complex classes are created, such as by combining whether the abuse is targeted or not along with its theme and strength.
In general, social scientific analyses encourage creating a detailed taxonomy with a large number of fine-grained categories. However, this is only useful for machine learning if there are enough data points in each category and if annotators are capable of consistently distinguishing between them. Complex annotation schemas may not result in better training datasets if they are not implemented in a robust way. As such, it is unsurprising that binary classification schemas are the most prevalent, even though they are arguably the least useful given the variety of ways in which abuse can be articulated. This can range from the explicit and overt (e.g. directing threats against a group) to more subtle behaviours, such as micro-aggressions and dismissing marginalised groups' experiences of prejudice. Subsuming both types of behaviour within one category not only risks making detection difficult (due to considerable in-class variation) but also leads to a detection system which cannot make important distinctions between qualitatively different types of content. This has severe implications for whether detection systems trained on such datasets can actually be used for downstream tasks, such as content moderation and social scientific analysis.
Drawing together the nature and granularity of abuse, our analyses identify a hierarchy of taxonomic granularity from least to most granular (a short schematic sketch follows this list):
Binary classification of a single `meta' category, such as hate/not or abuse/not. This can lead to very general and vague research, which is difficult to apply in practice.
Binary classification of a single type of abuse, such as person-directed or group-directed. This can be problematic given that abuse is nearly always directed against a group rather than `groups' per se.
Binary classification of abuse against a single well-defined group, such as racism/not or Islamophobia/not, or interpersonal abuse against a well-defined cohort, such as MPs and young people.
Multi-class (or multi-label) classification of different types of abuse, such as:
Multiple targets (e.g. racist, sexist and non-hateful content) or
Multiple strengths (e.g. none, implicit and explicit content).
Multiple types (e.g. threats versus derogatory statements or benevolent versus hostile statements).
Multi-class classification of different types of abuse which is integrated with other dimensions of abuse.
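The sketch below illustrates two points on this hierarchy as simple label records, from a binary `meta' label to a multi-dimensional one; the category values are examples only, not a recommended taxonomy.

```python
# An illustrative sketch of label records at two levels of taxonomic
# granularity. The field values are examples, not a prescribed schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BinaryLabel:
    abusive: bool                      # coarsest level: abuse / not

@dataclass
class GranularLabel:
    target: Optional[str]              # e.g. "group:women", "person", None
    strength: str                      # e.g. "none", "implicit", "explicit"
    abuse_type: Optional[str] = None   # e.g. "threat", "derogation"
```

More granular records like the second are only worthwhile if each combination of values is populated with enough examples and annotators can apply the distinctions consistently.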
<<</Detection tasks: Granularity of taxonomies>>>
<<</Uses of datasets: How detection tasks are defined>>>
<<</The purpose of training datasets>>>
<<<The content of training datasets>>>
<<<The `Level' of content>>>
49 of the training datasets are annotated at the level of the post, one dataset is annotated at the level of the user BIBREF47, and none of them are annotated at the level of the comment thread. Only two publications indicate that the entire conversational thread was presented to annotators when marking up individual entries, meaning that in most cases this important contextual information is not used. 49 of the training datasets contain only text. This is a considerable limitation of existing research BIBREF13, especially given the multimodal nature of online communication and the increasing ubiquity of digital-specific image-based forms of communication such as Memes, Gifs, Filters and Snaps BIBREF48. Although some work has addressed the task of detecting hateful images BIBREF49, BIBREF50, this led to the creation of a publicly available labelled training dataset in only one case BIBREF51. To our knowledge, no research has tackled the problem of detecting hateful audio content. This is a distinct challenge; alongside the semantic content, audio also contains important vocal cues which provide more opportunities to investigate (but also potentially misinterpret) tone and intention.
<<</The `Level' of content>>>
<<<Language>>>
The most common language in the training datasets is English, which appears in 20 datasets, followed by Arabic and Italian (5 datasets each), Hindi-English (4 datasets) and then German, Indonesian and Spanish (3 datasets). Noticeably, several major languages, both globally and in Europe, do not appear, which suggests considerable unevenness in the linguistic and cultural focuses of abusive language detection. For instance, there are major gaps in the coverage of European languages, including Danish and Dutch. Surprisingly, French only appears once. The dominance of English may be due to how we sampled publications (for which we used English terms), but may also reflect different publishing practices in different countries and how well-developed abusive content research is.
<<</Language>>>
<<<Source of data>>>
Training datasets use data collected from a range of online spaces, including from mainstream platforms, such as Twitter, Wikipedia and Facebook, to more niche forums, such as World of Warcraft and Stormfront. In most cases, data is collected from public sources and then manually annotated but in others data is sourced through proprietary data sharing agreements with host platforms. Unsurprisingly, Twitter is the most widely used source of data, accounting for 27 of the datasets. This reflects wider concerns in computational social research that Twitter is over-used, primarily because it has a very accessible API for data collection BIBREF52, BIBREF53. Facebook and Wikipedia are the second most used sources of data, accounting for three datasets each – although we note that all three Wikipedia datasets are reported in the same publication. Many of the most widely used online platforms are not represented at all, or only in one dataset, such as Reddit, Weibo, VK and YouTube.
The lack of diversity in where data is collected from limits the development of detection systems. Three main issues emerge:
Linguistic practices vary across platforms. Twitter only allows 280 characters (previously only 140), provoking stylistic changes BIBREF54, and abusive content detection systems trained on this data are unlikely to work as well with longer pieces of text. Dealing with longer pieces of text could necessitate different classification systems, potentially affecting the choice of algorithmic architecture. Additionally, the technical affordances of platforms may affect the style, tone and topic of the content they host.
The demographics of users on different platforms vary considerably. Social science research indicates that `digital divides' exist, whereby online users are not representative of wider populations and differ across different online spaces BIBREF53, BIBREF55, BIBREF56. Blank draws attention to how Twitter users are usually younger and wealthier than offline populations; over-reliance on data from Twitter means, in effect, that we are over-sampling data from this privileged section of society. Blank also shows that there are important cross-national differences: British Twitter users are better educated than the offline British population but the same is not true for American Twitter users compared with the offline American population BIBREF56. These demographic differences are likely to affect the types of content that users produce.
Platforms have different norms and so host different types and amounts of abuse. Mainstream platforms have made efforts in recent times to `clean up' content and so the most overt and aggressive forms of abuse, such as direct threats, are likely to be taken down BIBREF57. However, more niche platforms, such as Gab or 4chan, tolerate more offensive forms of speech and are more likely to contain explicit abuse, such as racism and very intrusive forms of harassment, such as `doxxing' BIBREF58, BIBREF59, BIBREF60. Over-reliance on a few sources of data could mean that datasets are biased towards only a subset of types of abuse.
<<</Source of data>>>
<<<Size>>>
The size of the training datasets varies considerably from 469 posts to 17 million; a difference of four orders of magnitude. Differences in size partly reflect different annotation approaches. The largest datasets are from proprietary data sharing agreements with platforms. Smaller datasets tend to be carefully collected and then manually annotated. There are no established guidelines for how large an abusive language training dataset needs to be. However, smaller datasets are problematic because they contain too little linguistic variation and increase the likelihood of overfitting. Rizoiu et al. BIBREF61 train detection models on only a proportion of the Davidson et al. and Waseem training datasets and show that this leads to worse performance, with a lower F1-Score, particularly for `data hungry' deep learning approaches BIBREF61. At the same time, `big' datasets alone are not a panacea for the challenges of abusive content classification. Large training datasets which have been poorly sampled, annotated with theoretically problematic categories or inexpertly and unthoughtfully annotated, could still lead to the development of poor classification systems.
The challenges posed by small datasets could potentially be overcome through machine learning techniques such as `semi-supervised' and `active' learning BIBREF62, although these have only been limitedly applied to abusive content detection so far BIBREF63. Sharifirad et al. propose using text augmentation and new text generation as a way of overcoming small datasets, which is a promising avenue for future research BIBREF64.
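As an illustration of the text augmentation idea, the following minimal sketch performs simple WordNet synonym replacement; this is a generic technique and may differ from the specific methods proposed in BIBREF64.

```python
# A minimal sketch of synonym-replacement augmentation for a small corpus.
# Requires the WordNet corpus: nltk.download('wordnet').
import random
from nltk.corpus import wordnet

def augment(sentence, replace_prob=0.2, seed=0):
    random.seed(seed)
    out = []
    for token in sentence.split():
        synonyms = {lemma.name().replace("_", " ")
                    for synset in wordnet.synsets(token)
                    for lemma in synset.lemmas()}
        synonyms.discard(token)
        if synonyms and random.random() < replace_prob:
            out.append(random.choice(sorted(synonyms)))
        else:
            out.append(token)
    return " ".join(out)

# Caution: replacing a word can change whether a sentence is abusive, so
# augmented items may need to be re-checked before they are used for training.
print(augment("you are a horrible person"))
```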
<<</Size>>>
<<<Class distribution and sampling>>>
Class distribution is an important, although often under-considered, aspect of the design of training datasets. Datasets with little abusive content will lack linguistic variation in terms of what is abusive, thereby increasing the risk of overfitting. More concerningly, the class distribution directly affects the nature of the engineering task and how performance should be evaluated. For instance, if a dataset is 70% hate speech then a zero-rule classification system (i.e. where everything is categorised as hate speech) will achieve 70% precision and 100% recall. This should be used as a baseline for evaluating performance: 80% precision is less impressive compared with this baseline. However, 80% precision on an evenly balanced dataset would be impressive. This is particularly important when evaluating the performance of ternary classifiers, when classes can be considerably imbalanced.
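The zero-rule baseline described above can be computed directly, as in the following minimal sketch; the label values are illustrative.

```python
# A minimal sketch of the zero-rule baseline: predict the majority class
# for every item and report the resulting metrics, which any real
# classifier should be compared against.
from collections import Counter
from sklearn.metrics import precision_score, recall_score, f1_score

def zero_rule_baseline(labels):
    majority = Counter(labels).most_common(1)[0][0]
    preds = [majority] * len(labels)
    return {
        "majority_class": majority,
        "precision": precision_score(labels, preds, pos_label=majority, zero_division=0),
        "recall": recall_score(labels, preds, pos_label=majority, zero_division=0),
        "f1": f1_score(labels, preds, pos_label=majority, zero_division=0),
    }

# On a dataset that is 70% hate speech, this yields precision 0.7 and
# recall 1.0 for the hate class.
labels = ["hate"] * 7 + ["not_hate"] * 3
print(zero_rule_baseline(labels))
```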
On average, 35% of the content in the training datasets is abusive. However, class distributions vary considerably, from those with just 1% abusive content up to 100%. These differences are largely a product of how data is sampled and which platform it is taken from. Bretschneider BIBREF22 created two datasets without using purposive sampling, and as such they contain very low levels of abuse (~1%). Other studies filter data collection based on platforms, time periods, keywords/hashtags and individuals to increase the prevalence of abuse. Four datasets comprise only abusive content; three cases are synthetic datasets, reported on in one publication BIBREF65, and in the other case the dataset is an amendment to an existing dataset and only contains misogynistic content BIBREF37.
Purposive sampling has been criticised for introducing various forms of bias into datasets BIBREF66, such as missing out on mis-spelled content BIBREF67 and only focusing on the linguistic patterns of an atypical subset of users. One pressing risk is that a lot of data is sampled from far right communities – which means that most hate speech classifiers implicitly pick up on right wing styles of discourse rather than hate speech per se. This could have profound consequences for our understanding of online political dialogue if the classifiers are applied uncritically to other groups. Nevertheless, purposive sampling is arguably a necessary step when creating a training dataset given the low prevalence of abuse on social media in general BIBREF68.
<<</Class distribution and sampling>>>
<<<Identity of the content creators>>>
The identity of the users who originally created the content in training datasets is described in only two cases. In both cases the data is synthetic BIBREF65, BIBREF33. Chung et al. use `nichesourcing' to synthetically generate abuse, with experts in tackling hate speech creating hateful posts. Sprugnoli et al. ask children to adopt pre-defined roles in an experimental classroom setup, and ask them to engage in a cyberbullying scenario. In most of the non-synthetic training datasets, some information is given about the sampling criteria used to collect data, such as hashtags. However, this does not provide direct insight into who the content creators are, such as their identity, demographics, online behavioural patterns and affiliations.
Providing more information about content creators may help address biases in existing datasets. For instance, Wiegand et al. show that 70% of the sexist tweets in the highly cited Waseem and Hovy dataset BIBREF36 come from two content creators and that 99% of the racist tweets come from just one BIBREF66. This is a serious constraint as it means that user-level metadata is artificially highly predictive of abuse. And, even when user-level metadata is not explicitly modelled, detection systems only need to pick up on the linguistic patterns of a few authors to nominally detect abuse. Overall, the complete lack of information about which users have created the content in most training datasets is a substantial limitation which may be driving as-yet-unrecognised biases. This can be remedied through the methodological rigour implicit in including a data statement with a corpus.
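One practical check, sketched below under the assumption that author identifiers are released alongside labels, is to measure how much of each class comes from its most prolific content creators; the column names are illustrative.

```python
# A minimal sketch of checking author concentration per class, the kind
# of bias reported by Wiegand et al. Column names are illustrative.
import pandas as pd

def author_concentration(df, label_col="label", author_col="author_id", top_k=2):
    """Share of each class contributed by its top_k most prolific authors."""
    shares = {}
    for label, group in df.groupby(label_col):
        counts = group[author_col].value_counts()
        shares[label] = counts.head(top_k).sum() / counts.sum()
    return shares

df = pd.DataFrame({
    "author_id": ["a", "a", "a", "b", "c", "d", "e"],
    "label":     ["sexism", "sexism", "sexism", "sexism", "none", "none", "none"],
})
print(author_concentration(df))  # {'none': 0.67, 'sexism': 1.0}
```

A class dominated by a handful of authors signals that a classifier may be learning those authors' styles rather than the abuse itself.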
<<</Identity of the content creators>>>
<<</The content of training datasets>>>
<<<Annotation of training datasets>>>
<<<Annotation process>>>
How training datasets are annotated is one of the most important aspects of their creation. A range of annotation processes are used in training datasets, which we split into five high-level categories:
Crowdsourcing (15 datasets). Crowdsourcing is widely used in NLP research because it is relatively cheap and easy to implement. The value of crowdsourcing lies in having annotations undertaken by `a large number of non-experts' (BIBREF69, p. 278) – any bit of content can be annotated by multiple annotators, effectively trading quality for quantity. Studies which use crowdsourcing with only a few annotators for each bit of content risk minimising quality without counterbalancing it with greater quantity. Furthermore, testing the work of many different annotators can be challenging BIBREF70, BIBREF71 and ensuring they are paid an ethical amount may make the cost comparable to using trained experts. Crowdsourcing has also been associated with `citizen science' initiatives to make academic research more accessible but this may not be fully realised in cases where annotation tasks are laborious and low-skilled BIBREF72, BIBREF20.
Academic experts (22 datasets). Expert annotation is time-intensive but is considered to produce higher quality annotations. Waseem reports that `systems trained on expert annotations outperform systems trained on amateur annotations.' BIBREF73 and, similarly, D'Orazio et al. claim, `although expert coding is costly, it produces quality data.' BIBREF74. However, the notion of an `expert' remains somewhat fuzzy within abusive content detection research. In many cases, publications only report that `an expert' is used, without specifying the nature of their expertise – even though this can vary substantially. For example, an expert may refer to an NLP practitioner, an undergraduate student with only modest levels of training, a member of an attacked social group relevant to the dataset or a researcher with a doctorate in the study of prejudice. In general, we anticipate that experts in the social scientific study of prejudice/abuse would perform better at annotation tasks than NLP experts who may not have any direct expertise in the conceptual and theoretical issues of abusive content annotation. In particular, one risk of using NLP practitioners, whether students or professionals, is that they might `game' training datasets based on what they anticipate is technically feasible for existing detection systems. For instance, if existing systems perform poorly when presented with long range dependencies, humour or subtle forms of hate (which are nonetheless usually discernible to human readers) then NLP experts could unintentionally use this expectation to inform their annotations and not label such content as hateful.
Professional moderators (3 datasets). Professional moderators offer a standardized approach to content annotation, implemented by experienced workers. This should, in principle, result in high quality annotations. However, one concern is that moderators are output-focused as their work involves determining whether content should be allowed or removed from platforms; they may not provide detailed labels about the nature of abuse and may also set the bar for content labelled `abusive' fairly high, missing out on more nuance and subtle varieties. In most cases, moderators will annotate for a range of unacceptable content, such as spam and sexual content, and this must be marked in datasets.
A mix of crowdsourcing and experts (6 datasets).
Synthetic data creation (4 datasets). Synthetic datasets are an interesting option as they are inherently non-authentic and therefore not necessarily representative of how abuse manifests in real-world situations. However, if they are created in realistic conditions by experts or relevant content creators then they can mimic real behaviour and have the added advantage that they may have broader coverage of different types of abuse. They are also usually easier to share.
<<</Annotation process>>>
<<<Identity of the annotators>>>
The data statements framework given by Bender and Friedman emphasises the importance of understanding who has completed annotations. Knowing who the annotators are is important because `their own “social address" influences their experience with language and thus their perception of what they are annotating.' BIBREF18 In the context of online abuse, Binns et al. show that the gender of annotators systematically influences what annotations they provide BIBREF75. No annotator will be well-versed in all of the slang or coded meanings used to construct abusive language. Indeed, many of these coded meanings are deliberately covert and obfuscated BIBREF76. To help mitigate these challenges, annotators should be (a) well-qualified and (b) diverse. A homogeneous group of annotators will be poorly equipped to catch all instances of abuse in a corpus. Recruiting an intentionally mixed groups of annotators is likely to yield better recall of abuse and thus a more precise dataset BIBREF77.
Information about annotators is unfortunately scarce. In 23 of the training datasets no information is given about the identity of annotators; in 17 datasets very limited information is given, such as whether the annotator is a native speaker of the language; and in just 10 cases is detailed information given. Interestingly, only 4 out of these 10 datasets are in the English language. Relevant information about annotators can be split into (i) Demographic information and (ii) annotators' expertise and experience. In none of the training sets is the full range of annotator information made available, which includes:
Demographic information. The nature of the task affects what information should be provided, as well as the geographic and cultural context. For instance, research on Islamophobia should include, at the very least, information about annotators' religious affiliation. Relevant variables include:
Age
Ethnicity and race
Religion
Gender
Sexual Orientation
Expertise and experience. Relevant variables include:
Field of research
Years of experience
Research status (e.g. research assistant or post-doc)
Personal experiences of abuse. In our review, none of the datasets contained systematic information about whether annotators had been personally targeted by abuse or had viewed such abuse online, even though this can impact annotators' perceptions. Relevant variables include:
Experiences of being targeted by online abuse.
Experiences of viewing online abuse.
<<</Identity of the annotators>>>
<<<Guidelines for annotation>>>
A key source of variation across datasets is whether annotators were given detailed guidelines, very minimal guidelines or no guidelines at all. Analysing this issue is made difficult by the fact that many dataset creators do not share their annotation guidelines. 21 of the datasets we study do not provide the guidelines and 14 only provide them in a highly summarised form. In just 15 datasets is detailed information given (and these are reported on in just 9 publications). Requiring researchers to publish annotation guidelines not only helps future researchers to better understand what datasets contain but also to improve and extend them. This could be crucial for improving the quality of annotations; as Ross et al. recommend, `raters need more detailed instructions for annotation.' BIBREF78
The degree of detail given in guidelines is linked to how the notion of `abuse' is understood. Some dataset creators construct clear and explicit guidelines in an attempt to ensure that annotations are uniform and align closely with social scientific concepts. In other cases, dataset creators allow annotators to apply their own perception. For instance, in their Portuguese language dataset, Fortuna et al. ask annotators to `evaluate if according to your opinion, these tweets contain hate speech' BIBREF38. The risk here is that annotators' perceptions may differ considerably; Salminen et al. show that online hate interpretation varies considerably across individuals BIBREF79. This is also reflected in inter-annotator agreement scores for abusive content, which are often very low, particularly for tasks which deploy more than just a binary taxonomy. However, it is unlikely that annotators could ever truly divorce themselves from their own social experience and background to decide on a single `objective' annotation. Abusive content annotation is better understood, epistemologically, as an intersubjective process in which agreement is constructed, rather than an objective process in which a `true' annotation is `found'. For this reason, some researchers have shifted the question of `how can we achieve the correct annotation?' to `who should decide what the correct annotation is?' BIBREF73. Ultimately, whether annotators should be allowed greater freedom in making annotations, and whether this results in higher quality datasets, needs further research and conceptual examination.
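Inter-annotator agreement can be measured pairwise with Cohen's kappa, as in the minimal sketch below; the label arrays are illustrative. For more than two annotators, a chance-corrected measure such as Krippendorff's alpha is more appropriate.

```python
# A minimal sketch of measuring pairwise inter-annotator agreement with
# Cohen's kappa. Low values signal unstable guidelines or genuinely
# contested content.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["hate", "not", "not", "hate", "not", "not"]
annotator_b = ["hate", "not", "hate", "not", "not", "not"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

Reporting such scores per class, rather than only overall, helps to show which categories of abuse annotators find hardest to agree on.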
Some aspects of abusive language present fundamental issues that are prone to unreliable annotation, such as Irony, Calumniation and Intent. They are intrinsically difficult to annotate given a third-person perspective on a piece of text as they involve making a judgement about indeterminate issues. However, they cannot be ignored given their prevalence in abusive content and their importance to how abuse is expressed. Thus, although they are fundamentally conceptual problems, these issues also present practical problems for annotators, and should be addressed explicitly in coding guidelines. Otherwise, as BIBREF80 note, these issues are likely to drive type II errors in classification, i.e. labelling non-hate-speech utterances as hate speech.
<<<Irony>>>
This covers statements that have a meaning contrary to that one might glean at first reading. Lachenicht BIBREF81 notes that Irony goes against Grice's quality maxim, and as such Ironic content requires closer attention from the reader as it is prone to being misinterpreted. Irony is a particularly difficult issue as in some cases it is primarily intended to provide humour (and thus might legitimately be considered non-abusive) but in other cases is used as a way of veiling genuine abuse. Previous research suggests that the problem is widespread. Sanguinetti et al. BIBREF82 find irony in 11% of hateful tweets in Italian. BIBREF25 find that irony is one of the most common phenomena in self-deleted comments; and that the prevalence of irony is 33.9% amongst deleted comments in a Croatian comment dataset and 18.1% amongst deleted comments in a Slovene comment dataset. Furthermore, annotating irony (as well as related constructs, such as sarcasm and humour) is inherently difficult. BIBREF83 report that agreement on sarcasm amongst annotators working in English is low, something echoed by annotations of Danish content BIBREF84. Irony is also one of the most common reasons for content to be re-moderated on appeal, according to Pavlopoulos et al. BIBREF24.
<<</Irony>>>
<<<Calumniation>>>
This covers false statements, slander, and libel. From the surveyed set, this is annotated in datasets for Greek BIBREF24 and for Croatian and Slovene BIBREF25. Its prevalence varies considerably across these two datasets and reliable estimations of the prevalence of false statements are not available. Calumniation is not only an empirical issue; it also raises conceptual problems: should false information be considered abusive if it slanders or demeans a person? However, if the information is then found out to be true, does it make the content any less abusive? Given the contentiousness of `objectivity', and the lack of consensus about most issues in a `post-truth' age BIBREF85, who should decide what is considered true? And, finally, how do we determine whether the content creator knows whether something is true? These ontological, epistemological and social questions are fundamental to the issue of truth and falsity in abusive language. Understandably, most datasets do not take any perspective on the truth and falsity of content. This is a practical solution: given error rates in abusive language detection as well as error rates in fact-checking, a system which combined both could be inapplicable in practice.
<<</Calumniation>>>
<<<Intent>>>
This information about the utterer's state of mind is a core part of how many types of abusive language are defined. Intent is usually used to emphasize the wrongness of abusive behaviour, such as spreading, inciting, promoting or justifying hatred or violence towards a given target, or sending a message that aims at dehumanising, delegitimising, hurting or intimidating them BIBREF82. BIBREF81 postulate that “aggravation, invective and rudeness ... may be performed with varying degrees of intention to hurt", and cite five legal degrees of intent BIBREF86. However, it is difficult to discern the intent of another speaker in a verbal conversation between humans, and even more difficult to do so through written and computer-mediated communications BIBREF87. Nevertheless, intent is particularly important for some categories of abuse such as bullying, maliciousness and hostility BIBREF34, BIBREF32. Most of the guidelines for the datasets we have studied do not contain an explicit discussion of intent, although there are exceptions. BIBREF88 include intent as a core part of their annotation standard, noting that understanding context (such as by seeing a speakers' other online messages) is crucial to achieving quality annotations. However, this proposition poses conceptual challenges given that people's intent can shift over time. Deleted comments have been used to study potential expressions of regret by users and, as such, a change in their intent BIBREF89, BIBREF25; this has also been reported as a common motivator even in self-deletion of non-abusive language BIBREF90. Equally, engaging in a sequence of targeted abusive language is an indicator of aggressive intent, and appears in several definitions. BIBREF23 require an “intent to physically assert power over women" as a requirement for multiple categories of misogynistic behaviour. BIBREF34 find that messages that are “unapologetically or intentionally offensive" fit in the highest grade of trolling under their schema.
Kenny et al. BIBREF86 note how sarcasm, irony, and humour complicate the picture of intent by introducing considerable difficulties in discerning the true intent of speakers (as discussed above). Part of the challenge is that many abusive terms, such as slurs and insults, are polysemic and may be co-opted by an ingroup into terms of entertainment and endearment BIBREF34.
<<</Intent>>>
<<</Guidelines for annotation>>>
<<</Annotation of training datasets>>>
<<</Analysis of training datasets>>>
<<<Dataset sharing>>>
<<<The challenges and opportunities of achieving Open Science>>>
All of the training datasets we analyse are publicly accessible and as such can be used by researchers other than the authors of the original publication. Sharing data is an important aspect of open science but also poses ethical and legal risks, especially in light of recent regulatory changes, such as the introduction of GDPR in the UK BIBREF91, BIBREF92. This problem is particularly acute with abusive content, which can be deeply shocking, and some training datasets from highly cited publications have not been made publicly available BIBREF93, BIBREF94, BIBREF95. Open science initiatives can also raise concerns amongst the public, who may not be comfortable with researchers sharing their personal data BIBREF96, BIBREF97.
The difficulty of sharing data in sensitive areas of research is reflected by the Islamist extremism research website, `Jihadology'. It chose to restrict public access in 2019, following efforts by Home Office counter-terrorism officials to shut it down completely. They were concerned that, whilst it aimed to support academic research into Islamist extremism, it may have inadvertently enabled individuals to radicalise by making otherwise banned extremist material available. By working with partners such as the not-for-profit Tech Against Terrorism, Jihadology created a secure area in the website, which can only be accessed by approved researchers. Some of the training datasets in our list have similar requirements, and can only be accessed following a registration process.
Open sharing of datasets is not only a question of scientific integrity and a powerful way of advancing scientific knowledge. It is also, fundamentally, a question of fairness and power. Opening access to datasets will enable less-well funded researchers and organisations, which includes researchers in the Global South and those working for not-for-profit organisations, to steer and contribute to research. This is a particularly pressing issue in a field which is directly concerned with the experiences of often-marginalised communities and actors BIBREF36. For instance, one growing concern is the biases encoded in detection systems and the impact this could have when they are applied in real-world settings BIBREF9, BIBREF10. This research could be further advanced by making more datasets and detection systems more easily available. For instance, Binns et al. use the detailed metadata in the datasets provided by Wulczyn et al. to investigate how the demographics of annotators impacts the annotations they make BIBREF75, BIBREF29. The value of such insights is only clear after the dataset has been shared – and, equally, is only possible because of data sharing.
More effective ways of sharing datasets would address the fact that datasets often deteriorate after they have been published BIBREF13. Several of the most widely used datasets provide only the annotations and IDs and must be `rehydrated' to collect the content. Both of the datasets provided by Waseem and Hovy and Founta et al. must be collected in this way BIBREF98, BIBREF36, and both have degraded considerably since they were first released as the tweets are no longer available on Twitter. Chung et al. also estimate that within 12 months the recently released dataset for counterspeech by Mathew et al. had lost more than 60% of its content BIBREF65, BIBREF58. Dataset degradation poses three main risks: First, if less data is available then there is a greater likelihood of overfitting. Second, the class distributions usually change as proportionally more of the abusive content is taken down than the non-abusive. Third, it is also likely that the more overt forms of abuse are taken down, rather than the covert instances, thereby changing the qualitative nature of the dataset.
<<</The challenges and opportunities of achieving Open Science>>>
<<<Research infrastructure: Solutions for sharing training datasets>>>
The problem of data access and sharing remains unresolved in the field of abusive content detection, much like in other areas of computational research BIBREF99. At present, an ethical, secure and easy way of sharing sensitive tools and resources has not been developed and adopted in the field. More effective dataset sharing would (1) enable greater collaboration amongst researchers, (2) enhance the reproducibility of research by encouraging greater scrutiny BIBREF100, BIBREF101, BIBREF102 and (3) substantively advance the field by enabling future researchers to better understand the biases and limitations of existing research and to identify new research directions.
There are two main challenges which must be overcome to ensure that training datasets can be shared and used by future researchers. First, dataset quality: the size, class distribution and quality of their content must be maintained. Second, dataset access: access to datasets must be controlled so that researchers can use them, whilst respecting platforms' Terms of Service and preventing potential extremists from gaining access. These problems are closely entwined and the solutions available, which follow, have implications for both of them.
Synthetic datasets. Four of the datasets we have reviewed were developed synthetically. This resolves the dataset quality problem but introduces additional biases and limitations because the data is not real. Synthetic datasets still need to be shared in such a way as to limit access for potential extremists but face no challenges from platforms' Terms of Service.
Data `philanthropy' or `donations'. These are defined as `the act of an individual actively consenting to donate their personal data for research' BIBREF97. Donated data from many individuals could then be combined and shared – but it would still need to be annotated. A further challenge is that many individuals who share abusive content may be unwilling to `donate' their data as this is commonly associated with prosocial motivations, creating severe class imbalances BIBREF97. Data donations could also open new moral and ethical issues; individuals' privacy could be impacted if data is re-analysed to derive new unexpected insights BIBREF103. Informed consent is difficult given that the exact nature of analyses may not be known in advance. Finally, data donations alone do not solve how access can be responsibly protected and how platforms' Terms of Service can be met. For these reasons, data donations are unlikely to be a key part of future research infrastructure for abusive content detection.
Platform-backed sharing. Platforms could share datasets and support researchers' access. There are no working examples of this in abusive content detection research, but it has been successfully used in other research areas. For instance, Twitter has made available a large dataset of accounts linked to potential information operations, known as the “IRA” (Internet Research Agency) dataset. This would require considerably more interfaces between academia and industry, which may be difficult given the challenges associated with existing initiatives, such as Social Science One. However, in the long term, we propose that this is the most effective solution for the problem of sharing training datasets, not only because it removes Terms of Service limitations but also because platforms have large volumes of original content which has been annotated in a detailed way. This could take one of two forms: platforms either make content which has violated their Community Guidelines available directly or they provide special access post-hoc to datasets which researchers have collected publicly through their API - thereby making sure that datasets do not degrade over time.
Data trusts. Data trusts have been described as a way of sharing data `in a fair, safe and equitable way' ( BIBREF104 p. 46). However, there is considerable disagreement as to what they entail and how they would operate in practice BIBREF105. The Open Data Institute identifies that data trusts aim to make data open and accessible by providing a framework for storing and accessing data, terms and mechanisms for resolving disputes and, in some cases, contracts to enforce them. For abusive content training datasets, this would provide a way of enabling datasets to be shared, although it would require considerable institutional, legal and financial commitments.
Arguably, the easiest way of ensuring data can be shared is to maintain a very simple data trust, such as a database, which would contain all available abusive content training datasets. This repository would need to be permissioned and access controlled to address concerns relating to privacy and ethics. Such a repository could substantially reduce the burden on researchers; once they have been approved to the repository, they could access all datasets publicly available – different levels of permission could be implemented for different datasets, depending on commercial or research sensitivity. Furthermore, this repository could contain all of the metadata reported with datasets and such information could be included at the point of deposit, based on the `data statements' work of Bender and Friedman BIBREF18. A simple API could be developed for depositing and reading data, similar to that of the HateBase. The permissioning system could be maintained either through a single institution or, to avoid power concentrating amongst a small group of researchers, through a decentralised blockchain.
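To make the `simple API' idea above concrete, the following is a minimal sketch of a hypothetical permissioned deposit-and-read service. The endpoint names, token scheme, in-memory store and required metadata fields are all illustrative assumptions; they do not describe HateBase or any existing repository.
```python
# Minimal sketch of a hypothetical permissioned repository API for
# abusive language training datasets. All names, tokens and storage
# details are illustrative assumptions, not an existing service.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
APPROVED_TOKENS = {"researcher-token-123"}   # tokens issued after vetting (assumption)
DATASETS = {}                                # dataset name -> deposited payload

def require_token():
    if request.headers.get("X-Access-Token") not in APPROVED_TOKENS:
        abort(403)                           # not an approved researcher

@app.route("/datasets", methods=["POST"])
def deposit():
    require_token()
    payload = request.get_json(force=True)
    # Require a data statement at the point of deposit.
    if "data_statement" not in payload.get("metadata", {}):
        abort(400, description="deposit must include a data statement")
    DATASETS[payload["name"]] = payload
    return jsonify({"status": "stored", "name": payload["name"]}), 201

@app.route("/datasets/<name>", methods=["GET"])
def read(name):
    require_token()
    if name not in DATASETS:
        abort(404)
    return jsonify(DATASETS[name])

if __name__ == "__main__":
    app.run()
```
Per-dataset permission tiers could be added by attaching an access level to each deposit and checking it inside require_token, which would support the different levels of permission mentioned above.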
<<</Research infrastructure: Solutions for sharing training datasets>>>
<<<A new repository of training datasets: Hatespeechdata.com>>>
The resources and infrastructure to create a dedicated data trust and API for sharing abusive content training datasets are substantial and require considerable further engagement with research teams in this field. In the interim, to encourage greater sharing of datasets, we have launched a dedicated website which contains all of the datasets analysed here: https://hatespeechdata.com. Based on the analysis in the previous sections, we have also provided partial data statements BIBREF18. The website also contains previously published abusive keyword dictionaries, which are not analysed here but which some researchers may find useful. Note that the website only contains information/data which the original authors have already made publicly available elsewhere. It will be updated with new datasets in the future.
<<</A new repository of training datasets: Hatespeechdata.com>>>
<<</Dataset sharing>>>
<<<Best Practices for training dataset creation>>>
Much can be learned from existing efforts to create abusive language datasets. We identify best practices which emerge at four distinct points in the process of creating a training dataset: (1) task formation, (2) data selection, (3) annotation, and (4) documentation.
<<<Task formation: Defining the task addressed by the dataset>>>
Dataset creation should be `problem driven' BIBREF106 and should address a well-defined and specific task, with a clear motivation. This will directly inform the taxonomy design, which should be well-specified and engage with social scientific theory as needed. Defining a clear task which the dataset addresses is especially important given the maturation of the field, ongoing terminological disagreement and the complexity of online abuse. The diversity of phenomena that fits under the umbrella of abusive language means that `general purpose' datasets are unlikely to advance the field. New datasets are most valuable when they address a new target, generator, phenomenon, or domain. Creating datasets which repeat existing work is not nearly as valuable.
<<</Task formation: Defining the task addressed by the dataset>>>
<<<Selecting data for abusive language annotation>>>
Once the task is established, dataset creators should select what language will be annotated, where data will be sampled from and how sampling will be completed. Any data selection exercise is bound to introduce bias, and so it is important to record what decisions are made (and why) in this step. Dataset builders should have a specific target size in mind and also have an idea of the minimum amount of data that is likely to be needed for the task. This is also where steps 1 and 2 intersect: the data selection should be driven by the problem that is addressed rather than what is easy to collect. Ensuring there are enough positive examples of abuse will always be challenging as the prevalence of abuse is so low. However, given that purposive sampling inevitably introduces biases, creators should explore a range of options before determining the best one – and consider using multiple sampling methods at once, such as including data from different times, different locations, different types of users and different platforms. Other options include using measures of linguistic diversity to maximize the variety of text included in datasets, or including words that cluster close to known abusive terms.
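The last suggestion, finding words that cluster close to known abusive terms, can be implemented with nearest-neighbour search over pretrained word embeddings. The sketch below is one minimal way to do this; the embedding file, seed terms and similarity threshold are placeholder assumptions.
```python
# Sketch: expand a vetted seed list of abusive terms with embedding
# nearest neighbours to diversify keyword-based sampling.
# The embedding path, seed terms and cut-off are illustrative assumptions.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)  # assumed file
seed_terms = ["slur_a", "slur_b"]  # placeholders; use a vetted lexicon in practice

expanded = set(seed_terms)
for term in seed_terms:
    if term in vectors:
        for neighbour, similarity in vectors.most_similar(term, topn=25):
            if similarity > 0.6:   # arbitrary threshold; tune per embedding space
                expanded.add(neighbour)

print(sorted(expanded))  # candidate keywords for sampling, to be reviewed manually
```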
<<</Selecting data for abusive language annotation>>>
<<<Annotating abusive language>>>
Annotators must be hired, trained and given appropriate guidelines. Annotators work best with solid guidelines that are easy to grasp and have clear examples BIBREF107. The best examples are both illustrative, in order to capture the concepts (such as `threatening language') and provide insight into `edge cases', which is content that only just crosses the line into abuse. Decisions should be made about how to handle intrinsically difficult aspects of abuse, such as irony, calumniation and intent (see above). Annotation guidelines should be developed iteratively by dataset creators; by working through the data, rules can be established for difficult or counter-intuitive coding decisions, and a set of shared practices developed. Annotators should be included in this iterative process. Discussions with annotators about the language that they have seen “in the field” offer an opportunity to enhance and refine guidelines - and even taxonomies. Such discussions will lead to more consistent data and provide a knowledge base to draw on for future work. To achieve this, it is important to adopt an open culture where annotators are comfortable providing open feedback and also describing their uncertainties. Annotators should also be given emotional and practical support (as well as appropriate financial compensation), and the harmful and potentially triggering effects of annotating online abuse should be recognised at all times. For a set of guidelines to help protect the well-being of annotators, see BIBREF13.
<<</Annotating abusive language>>>
<<<Documenting methods, data, and annotators>>>
The best training datasets provide as much information as possible and are well-documented. When the method behind them is unclear, they are hard to evaluate, use and build on. Providing as much information as possible can open new and unanticipated analyses and gives more agency to future researchers who use the dataset to create classifiers. For instance, if all annotators' codings are provided (rather than just the `final' decision) then a more nuanced and aware classifier could be developed as, in some cases, it can be better to maximise recall of annotations rather than maximise agreement BIBREF77.
Our review found that most datasets have poor methodological descriptions and few (if any) provide enough information to construct an adequate data statement. It is crucial that dataset creators are up front about their biases and limitations: every dataset is biased, and this is only problematic when the biases are unknown. One strategy for doing this is to maintain a document of decisions made when designing and creating the dataset and to then use it to describe to readers the rationale behind decisions. Details about the end-to-end dataset creation process are welcomed. For instance, if the task is crowdsourced then a screenshot of the micro-task presented to workers should be included, and the top-level parameters should be described (e.g. number of workers, maximum number of tasks per worker, number of annotations per piece of text) BIBREF20. If a dedicated interface is used for the annotation, this should also be described and screenshotted as the interface design can influence the annotations.
<<</Documenting methods, data, and annotators>>>
<<<Best practice summary>>>
Unfortunately, as with any burgeoning field, there is confusion and overlap around many of the phenomena discussed in this paper; coupled with the high degree of variation in the quality of method descriptions, this has led to many pieces of research that are hard to combine, compare, or re-use. Our reflections on best practices are driven by this review and the difficulties of creating high quality training datasets. For future researchers, we summarise our recommendations in the following seven points:
Bear in mind the purpose of the dataset; design the dataset to help address questions and problems from previous research.
Avoid using `easy to access' data, and instead explore new sources which may have greater diversity. Consider what biases may be created by your sampling method.
Determine size based on data sparsity and having enough positive classes rather than `what is possible'.
Establish a clear taxonomy to be used for the task, with meaningful and theoretically sound categories.
Provide annotators with guidelines; develop them iteratively and publish them with your dataset. Consider using trained annotators given the complexities of abusive content.
Involve people who have direct experience of the abuse which you are studying whenever possible (and provided that you can protect their well-being).
Report on every step of the research through a Data Statement.
<<</Best practice summary>>>
<<</Best Practices for training dataset creation>>>
<<<Conclusion>>>
This paper examined a large set of datasets for the creation of abusive content detection systems, providing insight into what they contain, how they are annotated, and how tasks have been framed. Based on an evidence-driven review, we provided an extended discussion of how to make training datasets more readily available and useful, including the challenges and opportunities of open science as well as the need for more research infrastructure. We reported on the development of hatespeechdata.com – a new repository for online abusive content training datasets. Finally, we outlined best practices for creation of training datasets for detection of online abuse. We have effectively met the four research aims elaborated at the start of the paper.
Training detection systems for online abuse is a substantial challenge with real social consequences. If we want the systems we develop to be useable, scalable and with few biases then we need to train them on the right data: garbage in will only lead to garbage out.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Introduction, Best Practices for training dataset creation"
],
"type": "disordered_section"
}
|
1911.02116
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Unsupervised Cross-lingual Representation Learning at Scale
<<<Abstract>>>
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available.
<<</Abstract>>>
<<<Introduction>>>
The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully studying the effects of training unsupervised cross-lingual representations at a very large scale. We present XLM-R, a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering.
Multilingual masked language models (MLM) like mBERT BIBREF0 and XLM BIBREF1 have pushed the state-of-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer models BIBREF2 on many languages. These models allow for effective cross-lingual transfer, as seen in a number of benchmarks including cross-lingual natural language inference BIBREF3, BIBREF4, BIBREF5, question answering BIBREF6, BIBREF7, and named entity recognition BIBREF8, BIBREF9. However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale especially for lower resource languages.
In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multilingual language models at scale, inspired by recent monolingual scaling efforts BIBREF10. We measure the trade-off between high-resource and low-resource languages and the impact of language sampling and vocabulary size. The experiments expose a trade-off as we scale the number of languages for a fixed model capacity: more languages leads to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades. We refer to this tradeoff as the curse of multilinguality, and show that it can be alleviated by simply increasing model capacity. We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets.
Our best model XLM-RoBERTa (XLM-R) outperforms mBERT on cross-lingual classification by up to 21% accuracy on low-resource languages like Swahili and Urdu. It outperforms the previous state of the art by 3.9% average accuracy on XNLI, 2.1% average F1-score on Named Entity Recognition, and 8.4% average F1-score on cross-lingual Question Answering. We also evaluate monolingual fine tuning on the GLUE and XNLI benchmarks, where XLM-R obtains results competitive with state-of-the-art monolingual models, including RoBERTa BIBREF10. These results demonstrate, for the first time, that it is possible to have a single large model for all languages, without sacrificing per-language performance. We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource language understanding.
<<</Introduction>>>
<<<Related Work>>>
From pretrained word embeddings BIBREF11, BIBREF12 to pretrained contextualized representations BIBREF13, BIBREF14 and transformer based language models BIBREF15, BIBREF0, unsupervised representation learning has significantly improved the state of the art in natural language understanding. Parallel work on cross-lingual understanding BIBREF16, BIBREF14, BIBREF1 extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages.
Most recently, BIBREF0 and BIBREF1 introduced mBERT and XLM - masked language models trained on multiple languages, without any cross-lingual supervision. BIBREF1 propose translation language modeling (TLM) as a way to leverage parallel data and obtain a new state of the art on the cross-lingual natural language inference (XNLI) benchmark BIBREF5. They further show strong improvements on unsupervised machine translation and pretraining for sequence generation. Separately, BIBREF8 demonstrated the effectiveness of multilingual models like mBERT on sequence labeling tasks. BIBREF17 showed gains over XLM using cross-lingual multi-task learning, and BIBREF18 demonstrated the efficiency of cross-lingual data augmentation for cross-lingual NLI. However, all of this work was at a relatively modest scale, in terms of the amount of training data, as compared to our approach.
The benefits of scaling language model pretraining by increasing the size of the model as well as the training data have been extensively studied in the literature. For the monolingual case, BIBREF19 show how large-scale LSTM models can obtain much stronger performance on language modeling benchmarks when trained on billions of tokens. GPT BIBREF15 also highlights the importance of scaling the amount of data and RoBERTa BIBREF10 shows that training BERT longer on more data leads to a significant boost in performance. Inspired by RoBERTa, we show that mBERT and XLM are undertuned, and that simple improvements in the learning procedure of unsupervised MLM lead to much better performance. We train on cleaned CommonCrawls BIBREF20, which increase the amount of data for low-resource languages by two orders of magnitude on average. Similar data has also been shown to be effective for learning high quality word embeddings in multiple languages BIBREF21.
Several efforts have trained massively multilingual machine translation models from large parallel corpora. They uncover the high and low resource trade-off and the problem of capacity dilution BIBREF22, BIBREF23. The work most similar to ours is BIBREF24, which trains a single model in 103 languages on over 25 billion parallel sentences. BIBREF25 further analyze the representations obtained by the encoder of a massively multilingual machine translation system and show that it obtains similar results to mBERT on cross-lingual NLI. Our work, in contrast, focuses on the unsupervised learning of cross-lingual representations and their transfer to discriminative tasks.
<<</Related Work>>>
<<<Model and Data>>>
In this section, we present the training objective, languages, and data we use. We follow the XLM approach BIBREF1 as closely as possible, only introducing changes that improve performance at scale.
<<<Masked Language Models.>>>
We use a Transformer model BIBREF2 trained with the multilingual MLM objective BIBREF0, BIBREF1 using only monolingual data. We sample streams of text from each language and train the model to predict the masked tokens in the input. We apply subword tokenization directly on raw text data using Sentence Piece BIBREF26 with a unigram language model BIBREF27. We sample batches from different languages using the same sampling distribution as BIBREF1, but with $\alpha =0.3$. Unlike BIBREF1, we do not use language embeddings, which allows our model to better deal with code-switching. We use a large vocabulary size of 250K with a full softmax and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params). For all of our ablation studies, we use a BERTBase architecture with a vocabulary of 150K tokens. Appendix SECREF8 goes into more details about the architecture of the different models referenced in this paper.
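As an illustration of the MLM corruption step described above, the sketch below applies the standard BERT-style recipe to a tokenised stream. The 15% masking rate and the 80/10/10 replacement split are the usual defaults from the MLM objective of BIBREF0/BIBREF1 and are assumptions here, since this paper does not restate them.
```python
import random

# Sketch of BERT-style dynamic masking over a stream of token ids.
# The 15% rate and 80/10/10 split are assumed standard MLM defaults.
def mask_tokens(token_ids, vocab_size, mask_id, mask_prob=0.15):
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100: ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = tok                       # only masked positions are predicted
            r = random.random()
            if r < 0.8:
                inputs[i] = mask_id               # 80%: replace with the <mask> token
            elif r < 0.9:
                inputs[i] = random.randrange(vocab_size)  # 10%: random token
            # remaining 10%: keep the original token
    return inputs, labels

corrupted, targets = mask_tokens([12, 845, 7, 9921, 4], vocab_size=250_000, mask_id=3)
```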
<<</Masked Language Models.>>>
<<<Scaling to a hundred languages.>>>
XLM-R is trained on 100 languages; we provide a full list of languages and associated statistics in Appendix SECREF7. Figure specifies the iso codes of 88 languages that are shared across XLM-R and XLM-100, the model from BIBREF1 trained on Wikipedia text in 100 languages.
Compared to previous work, we replace some languages with more commonly used ones such as romanized Hindi and traditional Chinese. In our ablation studies, we always include the 7 languages for which we have classification and sequence labeling evaluation benchmarks: English, French, German, Russian, Chinese, Swahili and Urdu. We chose this set as it covers a suitable range of language families and includes low-resource languages such as Swahili and Urdu. We also consider larger sets of 15, 30, 60 and all 100 languages. When reporting results on high-resource and low-resource, we refer to the average of English and French results, and the average of Swahili and Urdu results respectively.
<<</Scaling to a hundred languages.>>>
<<<Scaling the Amount of Training Data.>>>
Following BIBREF20, we build a clean CommonCrawl Corpus in 100 languages. We use an internal language identification model in combination with the one from fastText BIBREF28. We train language models in each language and use them to filter documents as described in BIBREF20. We consider one CommonCrawl dump for English and twelve dumps for all other languages, which significantly increases dataset sizes, especially for low-resource languages like Burmese and Swahili.
Figure shows the difference in size between the Wikipedia Corpus used by mBERT and XLM-100, and the CommonCrawl Corpus we use. As we show in Section SECREF19, monolingual Wikipedia corpora are too small to enable unsupervised representation learning. Based on our experiments, we found that a few hundred MiB of text data is usually a minimal size for learning a BERT model.
<<</Scaling the Amount of Training Data.>>>
<<</Model and Data>>>
<<<Evaluation>>>
We consider four evaluation benchmarks. For cross-lingual understanding, we use cross-lingual natural language inference, named entity recognition, and question answering. We use the GLUE benchmark to evaluate the English performance of XLM-R and compare it to other state-of-the-art models.
<<<Cross-lingual Natural Language Inference (XNLI).>>>
The XNLI dataset comes with ground-truth dev and test sets in 15 languages, and a ground-truth English training set. The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well. We evaluate our model on cross-lingual transfer from English to other languages. We also consider three machine translation baselines: (i) translate-test: dev and test sets are machine-translated to English and a single English model is used; (ii) translate-train (per-language): the English training set is machine-translated to each language and we fine-tune a multilingual model on each training set; (iii) translate-train-all (multi-language): we fine-tune a multilingual model on the concatenation of all training sets from translate-train. For the translations, we use the official data provided by the XNLI project.
<<</Cross-lingual Natural Language Inference (XNLI).>>>
<<<Named Entity Recognition.>>>
For NER, we consider the CoNLL-2002 BIBREF29 and CoNLL-2003 BIBREF30 datasets in English, Dutch, Spanish and German. We fine-tune multilingual models either (1) on the English set to evaluate cross-lingual transfer, (2) on each set to evaluate per-language performance, or (3) on all sets to evaluate multilingual learning. We report the F1 score, and compare to baselines from BIBREF31 and BIBREF32.
<<</Named Entity Recognition.>>>
<<<Cross-lingual Question Answering.>>>
We use the MLQA benchmark from BIBREF7, which extends the English SQuAD benchmark to Spanish, German, Arabic, Hindi, Vietnamese and Chinese. We report the F1 score as well as the exact match (EM) score for cross-lingual transfer from English.
<<</Cross-lingual Question Answering.>>>
<<<GLUE Benchmark.>>>
Finally, we evaluate the English performance of our model on the GLUE benchmark BIBREF33 which gathers multiple classification tasks, such as MNLI BIBREF4, SST-2 BIBREF34, or QNLI BIBREF35. We use BERTLarge and RoBERTa as baselines.
<<</GLUE Benchmark.>>>
<<</Evaluation>>>
<<<Analysis and Results>>>
In this section, we perform a comprehensive analysis of multilingual masked language models. We conduct most of the analysis on XNLI, which we found to be representative of our findings on other tasks. We then present the results of XLM-R on cross-lingual understanding and GLUE. Finally, we compare multilingual and monolingual models, and present results on low-resource languages.
<<<Improving and Understanding Multilingual Masked Language Models>>>
Much of the work done on understanding the cross-lingual effectiveness of mBERT or XLM BIBREF8, BIBREF9, BIBREF7 has focused on analyzing the performance of fixed pretrained models on downstream tasks. In this section, we present a comprehensive study of different factors that are important to pretraining large scale multilingual models. We highlight the trade-offs and limitations of these models as we scale to one hundred languages.
<<<Transfer-dilution trade-off and Curse of Multilinguality.>>>
Model capacity (i.e. the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference. For a fixed sized model, the per-language capacity decreases as we increase the number of languages. While low-resource language performance can be improved by adding similar higher-resource languages during pretraining, the overall downstream performance suffers from this capacity dilution BIBREF24. Positive transfer and capacity dilution have to be traded off against each other.
We illustrate this trade-off in Figure , which shows XNLI performance vs the number of languages the model is pretrained on. Initially, as we go from 7 to 15 languages, the model is able to take advantage of positive transfer and this improves performance, especially on low resource languages. Beyond this point the curse of multilinguality kicks in and degrades performance across all languages. Specifically, the overall XNLI accuracy decreases from 71.8% to 67.7% as we go from XLM-7 to XLM-100. The same trend can be observed for models trained on the larger CommonCrawl Corpus.
The issue is even more prominent when the capacity of the model is small. To show this, we pretrain models on Wikipedia Data in 7, 30 and 100 languages. As we add more languages, we make the Transformer wider by increasing the hidden size from 768 to 960 to 1152. In Figure , we show that the added capacity allows XLM-30 to be on par with XLM-7, thus overcoming the curse of multilinguality. The added capacity for XLM-100, however, is not enough and it still lags behind due to higher vocabulary dilution (recall from Section SECREF3 that we used a fixed vocabulary size of 150K for all models).
<<</Transfer-dilution trade-off and Curse of Multilinguality.>>>
<<<High-resource/Low-resource trade-off.>>>
The allocation of the model capacity across languages is controlled by several parameters: the training set size, the size of the shared subword vocabulary, and the rate at which we sample training examples from each language. We study the effect of sampling on the performance of high-resource (English and French) and low-resource (Swahili and Urdu) languages for an XLM-100 model trained on Wikipedia (we observe a similar trend for the construction of the subword vocab). Specifically, we investigate the impact of varying the $\alpha $ parameter which controls the exponential smoothing of the language sampling rate. Similar to BIBREF1, we use a sampling rate proportional to the number of sentences in each corpus. Models trained with higher values of $\alpha $ see batches of high-resource languages more often. Figure shows that the higher the value of $\alpha $, the better the performance on high-resource languages, and vice-versa. When considering overall performance, we found $0.3$ to be an optimal value for $\alpha $, and use this for XLM-R.
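Concretely, the smoothed sampling distribution follows the scheme of BIBREF1: with $p_i$ proportional to the number of sentences in language $i$, languages are sampled with probability $q_i = p_i^{\alpha } / \sum _j p_j^{\alpha }$. The sketch below illustrates how $\alpha =0.3$ up-samples low-resource languages relative to $\alpha =1$; the corpus sizes are made-up numbers.
```python
# Sketch of the exponentially smoothed language sampling distribution
# q_i ∝ p_i^alpha (following BIBREF1). Corpus sizes below are illustrative only.
def sampling_probs(sentence_counts, alpha=0.3):
    total = sum(sentence_counts.values())
    p = {lang: n / total for lang, n in sentence_counts.items()}
    weights = {lang: p_i ** alpha for lang, p_i in p.items()}
    z = sum(weights.values())
    return {lang: w / z for lang, w in weights.items()}

counts = {"en": 300_000_000, "fr": 60_000_000, "sw": 300_000, "ur": 700_000}
print(sampling_probs(counts, alpha=0.3))  # low-resource languages are up-sampled
print(sampling_probs(counts, alpha=1.0))  # raw proportions: high-resource dominates
```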
<<</High-resource/Low-resource trade-off.>>>
<<<Importance of Capacity and Vocabulary Size.>>>
In previous sections and in Figure , we showed the importance of scaling the model size as we increase the number of languages. Similar to the overall model size, we argue that scaling the size of the shared vocabulary (the vocabulary capacity) can improve the performance of multilingual models on downstream tasks. To illustrate this effect, we train XLM-100 models on Wikipedia data with different vocabulary sizes. We keep the overall number of parameters constant by adjusting the width of the transformer. Figure shows that even with a fixed capacity, we observe a 2.8% increase in XNLI average accuracy as we increase the vocabulary size from 32K to 256K. This suggests that multilingual models can benefit from allocating a higher proportion of the total number of parameters to the embedding layer even though this reduces the size of the Transformer. With bigger models, we believe that using a vocabulary of up to 2 million tokens with an adaptive softmax BIBREF36, BIBREF37 should improve performance even further, but we leave this exploration to future work. For simplicity and given the computational constraints, we use a vocabulary of 250k for XLM-R.
We further illustrate the importance of this parameter, by training three models with the same transformer architecture (BERTBase) but with different vocabulary sizes: 128K, 256K and 512K. We observe more than 3% gains in overall accuracy on XNLI by simply increasing the vocab size from 128k to 512k.
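A back-of-the-envelope sketch of this allocation effect is shown below: at a fixed hidden size, a larger shared vocabulary moves a growing share of the parameters into the embedding layer. The parameter counting ignores biases, layer norms and embedding tying, so the numbers are rough illustrations rather than the paper's exact figures.
```python
# Rough parameter accounting for a BERTBase-like encoder: embedding table
# versus Transformer body (attention + FFN with d_ff = 4h). Simplified on purpose.
def embedding_share(vocab_size, hidden=768, layers=12):
    embedding = vocab_size * hidden
    body = layers * (4 * hidden * hidden + 8 * hidden * hidden)
    return embedding / (embedding + body)

for vocab in (32_000, 128_000, 256_000, 512_000):
    print(f"vocab={vocab:>7,}  embedding share ~ {embedding_share(vocab):.0%}")
```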
<<</Importance of Capacity and Vocabulary Size.>>>
<<<Importance of large-scale training with more data.>>>
As shown in Figure , the CommonCrawl Corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora. Figure shows that for the same BERTBase architecture, all models trained on CommonCrawl obtain significantly better performance.
Apart from scaling the training data, BIBREF10 also showed the benefits of training MLMs longer. In our experiments, we observed similar effects of large-scale training, such as increasing batch size (see Figure ) and training time, on model performance. Specifically, we found that using validation perplexity as a stopping criterion for pretraining caused the multilingual MLM in BIBREF1 to be under-tuned. In our experience, performance on downstream tasks continues to improve even after validation perplexity has plateaued. Combining this observation with our implementation of the unsupervised XLM-MLM objective, we were able to improve the performance of BIBREF1 from 71.3% to more than 75% average accuracy on XNLI, which was on par with their supervised translation language modeling (TLM) objective. Based on these results, and given our focus on unsupervised learning, we decided to not use the supervised TLM objective for training our models.
<<</Importance of large-scale training with more data.>>>
<<<Simplifying multilingual tokenization with Sentence Piece.>>>
The different language-specific tokenization tools used by mBERT and XLM-100 make these models more difficult to use on raw text. Instead, we train a Sentence Piece model (SPM) and apply it directly on raw text data for all languages. We did not observe any loss in performance for models trained with SPM when compared to models trained with language-specific preprocessing and byte-pair encoding (see Figure ) and hence use SPM for XLM-R.
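A minimal sketch of this preprocessing step is shown below, using the sentencepiece Python package to train a unigram-LM model on raw text and tokenise with it. The file names and the tiny vocabulary size are placeholders; XLM-R itself uses a 250k vocabulary trained on far more data.
```python
import sentencepiece as spm

# Sketch: train a unigram-LM SentencePiece model directly on raw text and
# apply it without any language-specific preprocessing. Paths and the tiny
# vocabulary size are placeholders for illustration.
spm.SentencePieceTrainer.train(
    input="raw_multilingual.txt",   # one sentence per line, all languages mixed
    model_prefix="spm_multi",
    vocab_size=8000,                # XLM-R uses 250k; kept small here
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="spm_multi.model")
print(sp.encode("Ninapenda kujifunza lugha mpya.", out_type=str))  # subword pieces
print(sp.encode("Ninapenda kujifunza lugha mpya.", out_type=int))  # token ids
```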
<<</Simplifying multilingual tokenization with Sentence Piece.>>>
<<</Improving and Understanding Multilingual Masked Language Models>>>
<<<Cross-lingual Understanding Results>>>
Based on these results, we adapt the setting of BIBREF1 and use a large Transformer model with 24 layers and 1024 hidden states, with a 250k vocabulary. We use the multilingual MLM loss and train our XLM-R model for 1.5 Million updates on five hundred 32GB Nvidia V100 GPUs with a batch size of 8192. We leverage the SPM-preprocessed text data from CommonCrawl in 100 languages and sample languages with $\alpha =0.3$. In this section, we show that it outperforms all previous techniques on cross-lingual benchmarks while getting performance on par with RoBERTa on the GLUE benchmark.
<<<XNLI.>>>
Table shows XNLI results and adds some additional details: (i) the number of models the approach induces (#M), (ii) the data on which the model was trained (D), and (iii) the number of languages the model was pretrained on (#lg). As we show in our results, these parameters significantly impact performance. Column #M specifies whether model selection was done separately on the dev set of each language ($N$ models), or on the joint dev set of all the languages (single model). We observe a 0.6 decrease in overall accuracy when we go from $N$ models to a single model - going from 71.3 to 70.7. We encourage the community to adopt this setting. For cross-lingual transfer, while this approach is not fully zero-shot transfer, we argue that in real applications, a small amount of supervised data is often available for validation in each language.
XLM-R sets a new state of the art on XNLI. On cross-lingual transfer, XLM-R obtains 80.1% accuracy, outperforming the XLM-100 and mBERT open-source models by 9.4% and 13.8% average accuracy. On the Swahili and Urdu low-resource languages, XLM-R outperforms XLM-100 by 13.8% and 9.3%, and mBERT by 21.6% and 13.7%. While XLM-R handles 100 languages, we also show that it outperforms the former state of the art Unicoder BIBREF17 and XLM (MLM+TLM), which handle only 15 languages, by 4.7% and 5% average accuracy respectively. Using the multilingual training of translate-train-all, XLM-R further improves performance and reaches 82.4% accuracy, a new overall state of the art for XNLI, outperforming Unicoder by 3.9%. Multilingual training is similar to practical applications where training sets are available in various languages for the same task. In the case of XNLI, datasets have been translated, and translate-train-all can be seen as some form of cross-lingual data augmentation BIBREF18, similar to back-translation BIBREF38.
<<</XNLI.>>>
<<<Question Answering.>>>
We also obtain new state of the art results on the MLQA cross-lingual question answering benchmark, introduced by BIBREF7. We follow their procedure by training on the English training data and evaluating on the 7 languages of the dataset. We report results in Table . XLM-R obtains F1 and accuracy scores of 70.0% and 52.2% while the previous state of the art was 61.6% and 43.5%. XLM-R also outperforms mBERT by 12.3% F1-score and 10.6% accuracy. It even outperforms BERT-Large on English, confirming its strong monolingual performance.
<<</Question Answering.>>>
<<</Cross-lingual Understanding Results>>>
<<<Multilingual versus Monolingual>>>
In this section, we present results of multilingual XLM models against monolingual BERT models.
<<<GLUE: XLM-R versus RoBERTa.>>>
Our goal is to obtain a multilingual model with strong performance on both, cross-lingual understanding tasks as well as natural language understanding tasks for each language. To that end, we evaluate XLM-R on the GLUE benchmark. We show in Table , that XLM-R obtains better average dev performance than BERTLarge by 1.3% and reaches performance on par with XLNetLarge. The RoBERTa model outperforms XLM-R by only 1.3% on average. We believe future work can reduce this gap even further by alleviating the curse of multilinguality and vocabulary dilution. These results demonstrate the possibility of learning one model for many languages while maintaining strong performance on per-language downstream tasks.
<<</GLUE: XLM-R versus RoBERTa.>>>
<<<XNLI: XLM versus BERT.>>>
A recurrent criticism against multilingual models is that they obtain worse performance than their monolingual counterparts. In addition to the comparison of XLM-R and RoBERTa, we provide the first comprehensive study to assess this claim on the XNLI benchmark. We extend our comparison between multilingual XLM models and monolingual BERT models on 7 languages and compare performance in Table . We train 14 monolingual BERT models on Wikipedia and CommonCrawl, and two XLM-7 models. We add slightly more capacity in the vocabulary size of the multilingual model for a better comparison. To our surprise - and backed by further study on internal benchmarks - we found that multilingual models can outperform their monolingual BERT counterparts. Specifically, in Table , we show that for cross-lingual transfer, monolingual baselines outperform XLM-7 for both Wikipedia and CC by 1.6% and 1.3% average accuracy. However, by making use of multilingual training (translate-train-all) and leveraging training sets coming from multiple languages, XLM-7 can outperform the BERT models: our XLM-7 trained on CC obtains 80.0% average accuracy on the 7 languages, while the average performance of monolingual BERT models trained on CC is 77.5%. This is a surprising result that shows that the capacity of multilingual models to leverage training data coming from multiple languages for a particular task can overcome the capacity dilution problem to obtain better overall performance.
<<</XNLI: XLM versus BERT.>>>
<<</Multilingual versus Monolingual>>>
<<<Representation Learning for Low-resource Languages>>>
We observed in Table that pretraining on Wikipedia for Swahili and Urdu performed similarly to a randomly initialized model; most likely due to the small size of the data for these languages. On the other hand, pretraining on CC improved performance by up to 10 points. This confirms our assumption that mBERT and XLM-100 rely heavily on cross-lingual transfer but do not model the low-resource languages as well as XLM-R. Specifically, in the translate-train-all setting, we observe that the biggest gains for XLM models trained on CC, compared to their Wikipedia counterparts, are on low-resource languages; 7% and 4.8% improvement on Swahili and Urdu respectively.
<<</Representation Learning for Low-resource Languages>>>
<<</Analysis and Results>>>
<<<Conclusion>>>
In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages. We show that it provides strong gains over previous multilingual models like mBERT and XLM on classification, sequence labeling and question answering. We exposed the limitations of multilingual MLMs, in particular by uncovering the high-resource versus low-resource trade-off, the curse of multilinguality and the importance of key hyperparameters. We also expose the surprising effectiveness of multilingual models over monolingual models, and show strong improvements on low-resource languages.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Related Work"
],
"type": "disordered_section"
}
|
1911.02116
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Unsupervised Cross-lingual Representation Learning at Scale
<<<Abstract>>>
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available.
<<</Abstract>>>
<<<Introduction>>>
The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully studying the effects of training unsupervised cross-lingual representations at a very large scale. We present XLM-R, a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering.
Multilingual masked language models (MLM) like mBERT BIBREF0 and XLM BIBREF1 have pushed the state-of-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer models BIBREF2 on many languages. These models allow for effective cross-lingual transfer, as seen in a number of benchmarks including cross-lingual natural language inference BIBREF3, BIBREF4, BIBREF5, question answering BIBREF6, BIBREF7, and named entity recognition BIBREF8, BIBREF9. However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale especially for lower resource languages.
In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multilingual language models at scale, inspired by recent monolingual scaling efforts BIBREF10. We measure the trade-off between high-resource and low-resource languages and the impact of language sampling and vocabulary size. The experiments expose a trade-off as we scale the number of languages for a fixed model capacity: more languages leads to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades. We refer to this tradeoff as the curse of multilinguality, and show that it can be alleviated by simply increasing model capacity. We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets.
Our best model XLM-RoBERTa (XLM-R) outperforms mBERT on cross-lingual classification by up to 21% accuracy on low-resource languages like Swahili and Urdu. It outperforms the previous state of the art by 3.9% average accuracy on XNLI, 2.1% average F1-score on Named Entity Recognition, and 8.4% average F1-score on cross-lingual Question Answering. We also evaluate monolingual fine tuning on the GLUE and XNLI benchmarks, where XLM-R obtains results competitive with state-of-the-art monolingual models, including RoBERTa BIBREF10. These results demonstrate, for the first time, that it is possible to have a single large model for all languages, without sacrificing per-language performance. We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource language understanding.
<<</Introduction>>>
<<<Related Work>>>
From pretrained word embeddings BIBREF11, BIBREF12 to pretrained contextualized representations BIBREF13, BIBREF14 and transformer based language models BIBREF15, BIBREF0, unsupervised representation learning has significantly improved the state of the art in natural language understanding. Parallel work on cross-lingual understanding BIBREF16, BIBREF14, BIBREF1 extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages.
Most recently, BIBREF0 and BIBREF1 introduced mBERT and XLM - masked language models trained on multiple languages, without any cross-lingual supervision. BIBREF1 propose translation language modeling (TLM) as a way to leverage parallel data and obtain a new state of the art on the cross-lingual natural language inference (XNLI) benchmark BIBREF5. They further show strong improvements on unsupervised machine translation and pretraining for sequence generation. Separately, BIBREF8 demonstrated the effectiveness of multilingual models like mBERT on sequence labeling tasks. BIBREF17 showed gains over XLM using cross-lingual multi-task learning, and BIBREF18 demonstrated the efficiency of cross-lingual data augmentation for cross-lingual NLI. However, all of this work was at a relatively modest scale, in terms of the amount of training data, as compared to our approach.
The benefits of scaling language model pretraining by increasing the size of the model as well as the training data have been extensively studied in the literature. For the monolingual case, BIBREF19 show how large-scale LSTM models can obtain much stronger performance on language modeling benchmarks when trained on billions of tokens. GPT BIBREF15 also highlights the importance of scaling the amount of data and RoBERTa BIBREF10 shows that training BERT longer on more data leads to a significant boost in performance. Inspired by RoBERTa, we show that mBERT and XLM are undertuned, and that simple improvements in the learning procedure of unsupervised MLM lead to much better performance. We train on cleaned CommonCrawls BIBREF20, which increase the amount of data for low-resource languages by two orders of magnitude on average. Similar data has also been shown to be effective for learning high quality word embeddings in multiple languages BIBREF21.
Several efforts have trained massively multilingual machine translation models from large parallel corpora. They uncover the high and low resource trade-off and the problem of capacity dilution BIBREF22, BIBREF23. The work most similar to ours is BIBREF24, which trains a single model in 103 languages on over 25 billion parallel sentences. BIBREF25 further analyze the representations obtained by the encoder of a massively multilingual machine translation system and show that it obtains similar results to mBERT on cross-lingual NLI. Our work, in contrast, focuses on the unsupervised learning of cross-lingual representations and their transfer to discriminative tasks.
<<</Related Work>>>
<<<Model and Data>>>
In this section, we present the training objective, languages, and data we use. We follow the XLM approach BIBREF1 as closely as possible, only introducing changes that improve performance at scale.
<<<Masked Language Models.>>>
We use a Transformer model BIBREF2 trained with the multilingual MLM objective BIBREF0, BIBREF1 using only monolingual data. We sample streams of text from each language and train the model to predict the masked tokens in the input. We apply subword tokenization directly on raw text data using Sentence Piece BIBREF26 with a unigram language model BIBREF27. We sample batches from different languages using the same sampling distribution as BIBREF1, but with $\alpha =0.3$. Unlike BIBREF1, we do not use language embeddings, which allows our model to better deal with code-switching. We use a large vocabulary size of 250K with a full softmax and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params). For all of our ablation studies, we use a BERTBase architecture with a vocabulary of 150K tokens. Appendix SECREF8 goes into more details about the architecture of the different models referenced in this paper.
<<</Masked Language Models.>>>
<<<Scaling to a hundred languages.>>>
XLM-R is trained on 100 languages; we provide a full list of languages and associated statistics in Appendix SECREF7. Figure specifies the iso codes of 88 languages that are shared across XLM-R and XLM-100, the model from BIBREF1 trained on Wikipedia text in 100 languages.
Compared to previous work, we replace some languages with more commonly used ones such as romanized Hindi and traditional Chinese. In our ablation studies, we always include the 7 languages for which we have classification and sequence labeling evaluation benchmarks: English, French, German, Russian, Chinese, Swahili and Urdu. We chose this set as it covers a suitable range of language families and includes low-resource languages such as Swahili and Urdu. We also consider larger sets of 15, 30, 60 and all 100 languages. When reporting results on high-resource and low-resource, we refer to the average of English and French results, and the average of Swahili and Urdu results respectively.
<<</Scaling to a hundred languages.>>>
<<<Scaling the Amount of Training Data.>>>
Following BIBREF20, we build a clean CommonCrawl Corpus in 100 languages. We use an internal language identification model in combination with the one from fastText BIBREF28. We train language models in each language and use them to filter documents as described in BIBREF20. We consider one CommonCrawl dump for English and twelve dumps for all other languages, which significantly increases dataset sizes, especially for low-resource languages like Burmese and Swahili.
Figure shows the difference in size between the Wikipedia Corpus used by mBERT and XLM-100, and the CommonCrawl Corpus we use. As we show in Section SECREF19, monolingual Wikipedia corpora are too small to enable unsupervised representation learning. Based on our experiments, we found that a few hundred MiB of text data is usually a minimal size for learning a BERT model.
<<</Scaling the Amount of Training Data.>>>
<<</Model and Data>>>
<<<Evaluation>>>
We consider four evaluation benchmarks. For cross-lingual understanding, we use cross-lingual natural language inference, named entity recognition, and question answering. We use the GLUE benchmark to evaluate the English performance of XLM-R and compare it to other state-of-the-art models.
<<<Cross-lingual Natural Language Inference (XNLI).>>>
The XNLI dataset comes with ground-truth dev and test sets in 15 languages, and a ground-truth English training set. The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well. We evaluate our model on cross-lingual transfer from English to other languages. We also consider three machine translation baselines: (i) translate-test: dev and test sets are machine-translated to English and a single English model is used; (ii) translate-train (per-language): the English training set is machine-translated to each language and we fine-tune a multilingual model on each training set; (iii) translate-train-all (multi-language): we fine-tune a multilingual model on the concatenation of all training sets from translate-train. For the translations, we use the official data provided by the XNLI project.
<<</Cross-lingual Natural Language Inference (XNLI).>>>
<<<Named Entity Recognition.>>>
For NER, we consider the CoNLL-2002 BIBREF29 and CoNLL-2003 BIBREF30 datasets in English, Dutch, Spanish and German. We fine-tune multilingual models either (1) on the English set to evaluate cross-lingual transfer, (2) on each set to evaluate per-language performance, or (3) on all sets to evaluate multilingual learning. We report the F1 score, and compare to baselines from BIBREF31 and BIBREF32.
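For reference, entity-level F1 on BIO-tagged output can be computed with the seqeval package, as is common for CoNLL-style NER; the tag sequences below are toy examples, not CoNLL data, and this is not necessarily the scorer used by the authors.
    from seqeval.metrics import f1_score

    gold = [["B-PER", "I-PER", "O", "B-LOC"]]
    pred = [["B-PER", "I-PER", "O", "O"]]
    print(f1_score(gold, pred))   # ~0.67: the PER span is recovered, the LOC span is missed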
<<</Named Entity Recognition.>>>
<<<Cross-lingual Question Answering.>>>
We use the MLQA benchmark from BIBREF7, which extends the English SQuAD benchmark to Spanish, German, Arabic, Hindi, Vietnamese and Chinese. We report the F1 score as well as the exact match (EM) score for cross-lingual transfer from English.
<<</Cross-lingual Question Answering.>>>
<<<GLUE Benchmark.>>>
Finally, we evaluate the English performance of our model on the GLUE benchmark BIBREF33 which gathers multiple classification tasks, such as MNLI BIBREF4, SST-2 BIBREF34, or QNLI BIBREF35. We use BERTLarge and RoBERTa as baselines.
<<</GLUE Benchmark.>>>
<<</Evaluation>>>
<<<Analysis and Results>>>
In this section, we perform a comprehensive analysis of multilingual masked language models. We conduct most of the analysis on XNLI, which we found to be representative of our findings on other tasks. We then present the results of XLM-R on cross-lingual understanding and GLUE. Finally, we compare multilingual and monolingual models, and present results on low-resource languages.
<<<Improving and Understanding Multilingual Masked Language Models>>>
Much of the work done on understanding the cross-lingual effectiveness of mBERT or XLM BIBREF8, BIBREF9, BIBREF7 has focused on analyzing the performance of fixed pretrained models on downstream tasks. In this section, we present a comprehensive study of different factors that are important to pretraining large scale multilingual models. We highlight the trade-offs and limitations of these models as we scale to one hundred languages.
<<<Transfer-dilution trade-off and Curse of Multilinguality.>>>
Model capacity (i.e. the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference. For a fixed sized model, the per-language capacity decreases as we increase the number of languages. While low-resource language performance can be improved by adding similar higher-resource languages during pretraining, the overall downstream performance suffers from this capacity dilution BIBREF24. Positive transfer and capacity dilution have to be traded off against each other.
We illustrate this trade-off in Figure , which shows XNLI performance vs the number of languages the model is pretrained on. Initially, as we go from 7 to 15 languages, the model is able to take advantage of positive transfer and this improves performance, especially on low resource languages. Beyond this point the curse of multilinguality kicks in and degrades performance across all languages. Specifically, the overall XNLI accuracy decreases from 71.8% to 67.7% as we go from XLM-7 to XLM-100. The same trend can be observed for models trained on the larger CommonCrawl Corpus.
The issue is even more prominent when the capacity of the model is small. To show this, we pretrain models on Wikipedia Data in 7, 30 and 100 languages. As we add more languages, we make the Transformer wider by increasing the hidden size from 768 to 960 to 1152. In Figure , we show that the added capacity allows XLM-30 to be on par with XLM-7, thus overcoming the curse of multilinguality. The added capacity for XLM-100, however, is not enough and it still lags behind due to higher vocabulary dilution (recall from Section SECREF3 that we used a fixed vocabulary size of 150K for all models).
<<</Transfer-dilution trade-off and Curse of Multilinguality.>>>
<<<High-resource/Low-resource trade-off.>>>
The allocation of the model capacity across languages is controlled by several parameters: the training set size, the size of the shared subword vocabulary, and the rate at which we sample training examples from each language. We study the effect of sampling on the performance of high-resource (English and French) and low-resource (Swahili and Urdu) languages for an XLM-100 model trained on Wikipedia (we observe a similar trend for the construction of the subword vocab). Specifically, we investigate the impact of varying the $\alpha $ parameter which controls the exponential smoothing of the language sampling rate. Similar to BIBREF1, we use a sampling rate proportional to the number of sentences in each corpus. Models trained with higher values of $\alpha $ see batches of high-resource languages more often. Figure shows that the higher the value of $\alpha $, the better the performance on high-resource languages, and vice-versa. When considering overall performance, we found $0.3$ to be an optimal value for $\alpha $, and use this for XLM-R.
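A minimal sketch of this exponentially smoothed sampling distribution follows; the per-language sentence counts are made up for illustration, and only the exponent alpha matches the text.
    def sampling_probs(counts, alpha=0.3):
        """q_i proportional to p_i**alpha, where p_i is the empirical share of language i."""
        total = sum(counts.values())
        weights = {lang: (n / total) ** alpha for lang, n in counts.items()}
        z = sum(weights.values())
        return {lang: w / z for lang, w in weights.items()}

    counts = {"en": 300_000_000, "fr": 60_000_000, "sw": 300_000, "ur": 700_000}
    print(sampling_probs(counts, alpha=0.3))  # low-resource languages are up-sampled
    print(sampling_probs(counts, alpha=1.0))  # plain proportional sampling, for comparison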
<<</High-resource/Low-resource trade-off.>>>
<<<Importance of Capacity and Vocabulary Size.>>>
In previous sections and in Figure , we showed the importance of scaling the model size as we increase the number of languages. Similar to the overall model size, we argue that scaling the size of the shared vocabulary (the vocabulary capacity) can improve the performance of multilingual models on downstream tasks. To illustrate this effect, we train XLM-100 models on Wikipedia data with different vocabulary sizes. We keep the overall number of parameters constant by adjusting the width of the transformer. Figure shows that even with a fixed capacity, we observe a 2.8% increase in XNLI average accuracy as we increase the vocabulary size from 32K to 256K. This suggests that multilingual models can benefit from allocating a higher proportion of the total number of parameters to the embedding layer even though this reduces the size of the Transformer. With bigger models, we believe that using a vocabulary of up to 2 million tokens with an adaptive softmax BIBREF36, BIBREF37 should improve performance even further, but we leave this exploration to future work. For simplicity and given the computational constraints, we use a vocabulary of 250k for XLM-R.
We further illustrate the importance of this parameter, by training three models with the same transformer architecture (BERTBase) but with different vocabulary sizes: 128K, 256K and 512K. We observe more than 3% gains in overall accuracy on XNLI by simply increasing the vocab size from 128k to 512k.
<<</Importance of Capacity and Vocabulary Size.>>>
<<<Importance of large-scale training with more data.>>>
As shown in Figure , the CommonCrawl Corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora. Figure shows that for the same BERTBase architecture, all models trained on CommonCrawl obtain significantly better performance.
Apart from scaling the training data, BIBREF10 also showed the benefits of training MLMs longer. In our experiments, we observed similar effects of large-scale training, such as increasing batch size (see Figure ) and training time, on model performance. Specifically, we found that using validation perplexity as a stopping criterion for pretraining caused the multilingual MLM in BIBREF1 to be under-tuned. In our experience, performance on downstream tasks continues to improve even after validation perplexity has plateaued. Combining this observation with our implementation of the unsupervised XLM-MLM objective, we were able to improve the performance of BIBREF1 from 71.3% to more than 75% average accuracy on XNLI, which was on par with their supervised translation language modeling (TLM) objective. Based on these results, and given our focus on unsupervised learning, we decided to not use the supervised TLM objective for training our models.
<<</Importance of large-scale training with more data.>>>
<<<Simplifying multilingual tokenization with Sentence Piece.>>>
The different language-specific tokenization tools used by mBERT and XLM-100 make these models more difficult to use on raw text. Instead, we train a Sentence Piece model (SPM) and apply it directly on raw text data for all languages. We did not observe any loss in performance for models trained with SPM when compared to models trained with language-specific preprocessing and byte-pair encoding (see Figure ) and hence use SPM for XLM-R.
<<</Simplifying multilingual tokenization with Sentence Piece.>>>
<<</Improving and Understanding Multilingual Masked Language Models>>>
<<<Cross-lingual Understanding Results>>>
Based on these results, we adapt the setting of BIBREF1 and use a large Transformer model with 24 layers and 1024 hidden states, with a 250k vocabulary. We use the multilingual MLM loss and train our XLM-R model for 1.5 Million updates on five hundred 32GB Nvidia V100 GPUs with a batch size of 8192. We leverage the SPM-preprocessed text data from CommonCrawl in 100 languages and sample languages with $\alpha =0.3$. In this section, we show that it outperforms all previous techniques on cross-lingual benchmarks while getting performance on par with RoBERTa on the GLUE benchmark.
<<<XNLI.>>>
Table shows XNLI results and adds some additional details: (i) the number of models the approach induces (#M), (ii) the data on which the model was trained (D), and (iii) the number of languages the model was pretrained on (#lg). As we show in our results, these parameters significantly impact performance. Column #M specifies whether model selection was done separately on the dev set of each language ($N$ models), or on the joint dev set of all the languages (single model). We observe a 0.6 decrease in overall accuracy when we go from $N$ models to a single model - going from 71.3 to 70.7. We encourage the community to adopt this setting. For cross-lingual transfer, while this approach is not fully zero-shot transfer, we argue that in real applications, a small amount of supervised data is often available for validation in each language.
XLM-R sets a new state of the art on XNLI. On cross-lingual transfer, XLM-R obtains 80.1% accuracy, outperforming the XLM-100 and mBERT open-source models by 9.4% and 13.8% average accuracy. On the Swahili and Urdu low-resource languages, XLM-R outperforms XLM-100 by 13.8% and 9.3%, and mBERT by 21.6% and 13.7%. While XLM-R handles 100 languages, we also show that it outperforms the former state of the art Unicoder BIBREF17 and XLM (MLM+TLM), which handle only 15 languages, by 4.7% and 5% average accuracy respectively. Using the multilingual training of translate-train-all, XLM-R further improves performance and reaches 82.4% accuracy, a new overall state of the art for XNLI, outperforming Unicoder by 3.9%. Multilingual training is similar to practical applications where training sets are available in various languages for the same task. In the case of XNLI, datasets have been translated, and translate-train-all can be seen as some form of cross-lingual data augmentation BIBREF18, similar to back-translation BIBREF38.
<<</XNLI.>>>
<<<Question Answering.>>>
We also obtain new state of the art results on the MLQA cross-lingual question answering benchmark, introduced by BIBREF7. We follow their procedure by training on the English training data and evaluating on the 7 languages of the dataset. We report results in Table . XLM-R obtains F1 and accuracy scores of 70.0% and 52.2% while the previous state of the art was 61.6% and 43.5%. XLM-R also outperforms mBERT by 12.3% F1-score and 10.6% accuracy. It even outperforms BERT-Large on English, confirming its strong monolingual performance.
<<</Question Answering.>>>
<<</Cross-lingual Understanding Results>>>
<<<Multilingual versus Monolingual>>>
In this section, we present results of multilingual XLM models against monolingual BERT models.
<<<GLUE: XLM-R versus RoBERTa.>>>
Our goal is to obtain a multilingual model with strong performance on both cross-lingual understanding tasks and natural language understanding tasks for each language. To that end, we evaluate XLM-R on the GLUE benchmark. We show in Table  that XLM-R obtains better average dev performance than BERTLarge by 1.3% and reaches performance on par with XLNetLarge. The RoBERTa model outperforms XLM-R by only 1.3% on average. We believe future work can reduce this gap even further by alleviating the curse of multilinguality and vocabulary dilution. These results demonstrate the possibility of learning one model for many languages while maintaining strong performance on per-language downstream tasks.
<<</GLUE: XLM-R versus RoBERTa.>>>
<<<XNLI: XLM versus BERT.>>>
A recurrent criticism of multilingual models is that they obtain worse performance than their monolingual counterparts. In addition to the comparison of XLM-R and RoBERTa, we provide the first comprehensive study to assess this claim on the XNLI benchmark. We extend our comparison between multilingual XLM models and monolingual BERT models on 7 languages and compare performance in Table . We train 14 monolingual BERT models on Wikipedia and CommonCrawl, and two XLM-7 models. We add slightly more capacity in the vocabulary size of the multilingual model for a better comparison. To our surprise - and backed by further study on internal benchmarks - we found that multilingual models can outperform their monolingual BERT counterparts. Specifically, in Table , we show that for cross-lingual transfer, monolingual baselines outperform XLM-7 for both Wikipedia and CC by 1.6% and 1.3% average accuracy. However, by making use of multilingual training (translate-train-all) and leveraging training sets coming from multiple languages, XLM-7 can outperform the BERT models: our XLM-7 trained on CC obtains 80.0% average accuracy on the 7 languages, while the average performance of monolingual BERT models trained on CC is 77.5%. This is a surprising result that shows that the capacity of multilingual models to leverage training data coming from multiple languages for a particular task can overcome the capacity dilution problem to obtain better overall performance.
<<</XNLI: XLM versus BERT.>>>
<<</Multilingual versus Monolingual>>>
<<<Representation Learning for Low-resource Languages>>>
We observed in Table that pretraining on Wikipedia for Swahili and Urdu performed similarly to a randomly initialized model; most likely due to the small size of the data for these languages. On the other hand, pretraining on CC improved performance by up to 10 points. This confirms our assumption that mBERT and XLM-100 rely heavily on cross-lingual transfer but do not model the low-resource languages as well as XLM-R. Specifically, in the translate-train-all setting, we observe that the biggest gains for XLM models trained on CC, compared to their Wikipedia counterparts, are on low-resource languages; 7% and 4.8% improvement on Swahili and Urdu respectively.
<<</Representation Learning for Low-resource Languages>>>
<<</Analysis and Results>>>
<<<Conclusion>>>
In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages. We show that it provides strong gains over previous multilingual models like mBERT and XLM on classification, sequence labeling and question answering. We exposed the limitations of multilingual MLMs, in particular by uncovering the high-resource versus low-resource trade-off, the curse of multilinguality and the importance of key hyperparameters. We also expose the surprising effectiveness of multilingual models over monolingual models, and show strong improvements on low-resource languages.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Abstract, Evaluation"
],
"type": "disordered_section"
}
|
1911.02116
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Unsupervised Cross-lingual Representation Learning at Scale
<<<Abstract>>>
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-Ris very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available.
<<</Abstract>>>
<<<Introduction>>>
The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully studying the effects of training unsupervised cross-lingual representations at a very large scale. We present XLM-R, a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering.
Multilingual masked language models (MLM) like mBERT BIBREF0 and XLM BIBREF1 have pushed the state-of-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer models BIBREF2 on many languages. These models allow for effective cross-lingual transfer, as seen in a number of benchmarks including cross-lingual natural language inference BIBREF3, BIBREF4, BIBREF5, question answering BIBREF6, BIBREF7, and named entity recognition BIBREF8, BIBREF9. However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale especially for lower resource languages.
In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multilingual language models at scale, inspired by recent monolingual scaling efforts BIBREF10. We measure the trade-off between high-resource and low-resource languages and the impact of language sampling and vocabulary size. The experiments expose a trade-off as we scale the number of languages for a fixed model capacity: more languages leads to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades. We refer to this tradeoff as the curse of multilinguality, and show that it can be alleviated by simply increasing model capacity. We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets.
Our best model XLM-RoBERTa (XLM-R) outperforms mBERT on cross-lingual classification by up to 21% accuracy on low-resource languages like Swahili and Urdu. It outperforms the previous state of the art by 3.9% average accuracy on XNLI, 2.1% average F1-score on Named Entity Recognition, and 8.4% average F1-score on cross-lingual Question Answering. We also evaluate monolingual fine tuning on the GLUE and XNLI benchmarks, where XLM-R obtains results competitive with state-of-the-art monolingual models, including RoBERTa BIBREF10. These results demonstrate, for the first time, that it is possible to have a single large model for all languages, without sacrificing per-language performance. We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource language understanding.
<<</Introduction>>>
<<<Related Work>>>
From pretrained word embeddings BIBREF11, BIBREF12 to pretrained contextualized representations BIBREF13, BIBREF14 and transformer based language models BIBREF15, BIBREF0, unsupervised representation learning has significantly improved the state of the art in natural language understanding. Parallel work on cross-lingual understanding BIBREF16, BIBREF14, BIBREF1 extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages.
Most recently, BIBREF0 and BIBREF1 introduced mBERT and XLM - masked language models trained on multiple languages, without any cross-lingual supervision. BIBREF1 propose translation language modeling (TLM) as a way to leverage parallel data and obtain a new state of the art on the cross-lingual natural language inference (XNLI) benchmark BIBREF5. They further show strong improvements on unsupervised machine translation and pretraining for sequence generation. Separately, BIBREF8 demonstrated the effectiveness of multilingual models like mBERT on sequence labeling tasks. BIBREF17 showed gains over XLM using cross-lingual multi-task learning, and BIBREF18 demonstrated the efficiency of cross-lingual data augmentation for cross-lingual NLI. However, all of this work was at a relatively modest scale, in terms of the amount of training data, as compared to our approach.
The benefits of scaling language model pretraining by increasing the size of the model as well as the training data has been extensively studied in the literature. For the monolingual case, BIBREF19 show how large-scale LSTM models can obtain much stronger performance on language modeling benchmarks when trained on billions of tokens. GPT BIBREF15 also highlights the importance of scaling the amount of data and RoBERTa BIBREF10 shows that training BERT longer on more data leads to significant boost in performance. Inspired by RoBERTa, we show that mBERT and XLM are undertuned, and that simple improvements in the learning procedure of unsupervised MLM leads to much better performance. We train on cleaned CommonCrawls BIBREF20, which increase the amount of data for low-resource languages by two orders of magnitude on average. Similar data has also been shown to be effective for learning high quality word embeddings in multiple languages BIBREF21.
Several efforts have trained massively multilingual machine translation models from large parallel corpora. They uncover the high and low resource trade-off and the problem of capacity dilution BIBREF22, BIBREF23. The work most similar to ours is BIBREF24, which trains a single model in 103 languages on over 25 billion parallel sentences. BIBREF25 further analyze the representations obtained by the encoder of a massively multilingual machine translation system and show that it obtains similar results to mBERT on cross-lingual NLI. Our work, in contrast, focuses on the unsupervised learning of cross-lingual representations and their transfer to discriminative tasks.
<<</Related Work>>>
<<<Model and Data>>>
In this section, we present the training objective, languages, and data we use. We follow the XLM approach BIBREF1 as closely as possible, only introducing changes that improve performance at scale.
<<<Masked Language Models.>>>
We use a Transformer model BIBREF2 trained with the multilingual MLM objective BIBREF0, BIBREF1 using only monolingual data. We sample streams of text from each language and train the model to predict the masked tokens in the input. We apply subword tokenization directly on raw text data using Sentence Piece BIBREF26 with a unigram language model BIBREF27. We sample batches from different languages using the same sampling distribution as BIBREF1, but with $\alpha =0.3$. Unlike BIBREF1, we do not use language embeddings, which allows our model to better deal with code-switching. We use a large vocabulary size of 250K with a full softmax and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params). For all of our ablation studies, we use a BERTBase architecture with a vocabulary of 150K tokens. Appendix SECREF8 goes into more details about the architecture of the different models referenced in this paper.
<<</Masked Language Models.>>>
<<<Scaling to a hundred languages.>>>
XLM-R is trained on 100 languages; we provide a full list of languages and associated statistics in Appendix SECREF7. Figure specifies the iso codes of 88 languages that are shared across XLM-R and XLM-100, the model from BIBREF1 trained on Wikipedia text in 100 languages.
Compared to previous work, we replace some languages with more commonly used ones such as romanized Hindi and traditional Chinese. In our ablation studies, we always include the 7 languages for which we have classification and sequence labeling evaluation benchmarks: English, French, German, Russian, Chinese, Swahili and Urdu. We chose this set as it covers a suitable range of language families and includes low-resource languages such as Swahili and Urdu. We also consider larger sets of 15, 30, 60 and all 100 languages. When reporting results on high-resource and low-resource languages, we refer to the average of English and French results and the average of Swahili and Urdu results, respectively.
<<</Scaling to a hundred languages.>>>
<<<Scaling the Amount of Training Data.>>>
Following BIBREF20, we build a clean CommonCrawl Corpus in 100 languages. We use an internal language identification model in combination with the one from fastText BIBREF28. We train language models in each language and use them to filter documents as described in BIBREF20. We consider one CommonCrawl dump for English and twelve dumps for all other languages, which significantly increases dataset sizes, especially for low-resource languages like Burmese and Swahili.
Figure shows the difference in size between the Wikipedia Corpus used by mBERT and XLM-100, and the CommonCrawl Corpus we use. As we show in Section SECREF19, monolingual Wikipedia corpora are too small to enable unsupervised representation learning. Based on our experiments, we found that a few hundred MiB of text data is usually a minimal size for learning a BERT model.
<<</Scaling the Amount of Training Data.>>>
<<</Model and Data>>>
<<<Evaluation>>>
We consider four evaluation benchmarks. For cross-lingual understanding, we use cross-lingual natural language inference, named entity recognition, and question answering. We use the GLUE benchmark to evaluate the English performance of XLM-R and compare it to other state-of-the-art models.
<<<Cross-lingual Natural Language Inference (XNLI).>>>
The XNLI dataset comes with ground-truth dev and test sets in 15 languages, and a ground-truth English training set. The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well. We evaluate our model on cross-lingual transfer from English to other languages. We also consider three machine translation baselines: (i) translate-test: dev and test sets are machine-translated to English and a single English model is used; (ii) translate-train (per-language): the English training set is machine-translated to each language and we fine-tune a multilingual model on each training set; (iii) translate-train-all (multi-language): we fine-tune a multilingual model on the concatenation of all training sets from translate-train. For the translations, we use the official data provided by the XNLI project.
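As an illustration of the cross-lingual transfer setup (fine-tune on English NLI data, then evaluate the same checkpoint on the other XNLI languages), a sketch using the public Hugging Face checkpoint "xlm-roberta-base"; this is not the authors' code, and the model must first be fine-tuned on English NLI before its predictions mean anything.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=3)
    # ... fine-tune `model` on the English NLI training data here ...

    premise = "Der Hund schläft im Garten."   # German premise/hypothesis pair, XNLI-style
    hypothesis = "Ein Tier ruht sich aus."
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(-1).item()
    print(pred)   # class index; the label mapping is fixed during fine-tuning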
<<</Cross-lingual Natural Language Inference (XNLI).>>>
<<<Named Entity Recognition.>>>
For NER, we consider the CoNLL-2002 BIBREF29 and CoNLL-2003 BIBREF30 datasets in English, Dutch, Spanish and German. We fine-tune multilingual models either (1) on the English set to evaluate cross-lingual transfer, (2) on each set to evaluate per-language performance, or (3) on all sets to evaluate multilingual learning. We report the F1 score, and compare to baselines from BIBREF31 and BIBREF32.
<<</Named Entity Recognition.>>>
<<<Cross-lingual Question Answering.>>>
We use the MLQA benchmark from BIBREF7, which extends the English SQuAD benchmark to Spanish, German, Arabic, Hindi, Vietnamese and Chinese. We report the F1 score as well as the exact match (EM) score for cross-lingual transfer from English.
<<</Cross-lingual Question Answering.>>>
<<<GLUE Benchmark.>>>
Finally, we evaluate the English performance of our model on the GLUE benchmark BIBREF33 which gathers multiple classification tasks, such as MNLI BIBREF4, SST-2 BIBREF34, or QNLI BIBREF35. We use BERTLarge and RoBERTa as baselines.
<<</GLUE Benchmark.>>>
<<</Evaluation>>>
<<<Analysis and Results>>>
In this section, we perform a comprehensive analysis of multilingual masked language models. We conduct most of the analysis on XNLI, which we found to be representative of our findings on other tasks. We then present the results of XLM-R on cross-lingual understanding and GLUE. Finally, we compare multilingual and monolingual models, and present results on low-resource languages.
<<<Improving and Understanding Multilingual Masked Language Models>>>
Much of the work done on understanding the cross-lingual effectiveness of mBERT or XLM BIBREF8, BIBREF9, BIBREF7 has focused on analyzing the performance of fixed pretrained models on downstream tasks. In this section, we present a comprehensive study of different factors that are important to pretraining large scale multilingual models. We highlight the trade-offs and limitations of these models as we scale to one hundred languages.
<<<Transfer-dilution trade-off and Curse of Multilinguality.>>>
Model capacity (i.e. the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference. For a fixed sized model, the per-language capacity decreases as we increase the number of languages. While low-resource language performance can be improved by adding similar higher-resource languages during pretraining, the overall downstream performance suffers from this capacity dilution BIBREF24. Positive transfer and capacity dilution have to be traded off against each other.
We illustrate this trade-off in Figure , which shows XNLI performance vs the number of languages the model is pretrained on. Initially, as we go from 7 to 15 languages, the model is able to take advantage of positive transfer and this improves performance, especially on low resource languages. Beyond this point the curse of multilinguality kicks in and degrades performance across all languages. Specifically, the overall XNLI accuracy decreases from 71.8% to 67.7% as we go from XLM-7 to XLM-100. The same trend can be observed for models trained on the larger CommonCrawl Corpus.
The issue is even more prominent when the capacity of the model is small. To show this, we pretrain models on Wikipedia Data in 7, 30 and 100 languages. As we add more languages, we make the Transformer wider by increasing the hidden size from 768 to 960 to 1152. In Figure , we show that the added capacity allows XLM-30 to be on par with XLM-7, thus overcoming the curse of multilinguality. The added capacity for XLM-100, however, is not enough and it still lags behind due to higher vocabulary dilution (recall from Section SECREF3 that we used a fixed vocabulary size of 150K for all models).
<<</Transfer-dilution trade-off and Curse of Multilinguality.>>>
<<<High-resource/Low-resource trade-off.>>>
The allocation of the model capacity across languages is controlled by several parameters: the training set size, the size of the shared subword vocabulary, and the rate at which we sample training examples from each language. We study the effect of sampling on the performance of high-resource (English and French) and low-resource (Swahili and Urdu) languages for an XLM-100 model trained on Wikipedia (we observe a similar trend for the construction of the subword vocab). Specifically, we investigate the impact of varying the $\alpha $ parameter which controls the exponential smoothing of the language sampling rate. Similar to BIBREF1, we use a sampling rate proportional to the number of sentences in each corpus. Models trained with higher values of $\alpha $ see batches of high-resource languages more often. Figure shows that the higher the value of $\alpha $, the better the performance on high-resource languages, and vice-versa. When considering overall performance, we found $0.3$ to be an optimal value for $\alpha $, and use this for XLM-R.
<<</High-resource/Low-resource trade-off.>>>
<<<Importance of Capacity and Vocabulary Size.>>>
In previous sections and in Figure , we showed the importance of scaling the model size as we increase the number of languages. Similar to the overall model size, we argue that scaling the size of the shared vocabulary (the vocabulary capacity) can improve the performance of multilingual models on downstream tasks. To illustrate this effect, we train XLM-100 models on Wikipedia data with different vocabulary sizes. We keep the overall number of parameters constant by adjusting the width of the transformer. Figure shows that even with a fixed capacity, we observe a 2.8% increase in XNLI average accuracy as we increase the vocabulary size from 32K to 256K. This suggests that multilingual models can benefit from allocating a higher proportion of the total number of parameters to the embedding layer even though this reduces the size of the Transformer. With bigger models, we believe that using a vocabulary of up to 2 million tokens with an adaptive softmax BIBREF36, BIBREF37 should improve performance even further, but we leave this exploration to future work. For simplicity and given the computational constraints, we use a vocabulary of 250k for XLM-R.
We further illustrate the importance of this parameter, by training three models with the same transformer architecture (BERTBase) but with different vocabulary sizes: 128K, 256K and 512K. We observe more than 3% gains in overall accuracy on XNLI by simply increasing the vocab size from 128k to 512k.
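To make the capacity argument above concrete, a back-of-the-envelope parameter count shows how a larger embedding table must be paid for by a narrower Transformer when the total budget is fixed; the 12*H^2 per-layer approximation ignores biases and layer norms and is only a rough sketch, not the authors' accounting.
    def approx_params(vocab_size, hidden, layers=12):
        """Very rough count: embeddings + 12*H^2 per layer (4H^2 attention + 8H^2 feed-forward)."""
        return vocab_size * hidden + layers * 12 * hidden * hidden

    for v in (32_000, 128_000, 256_000, 512_000):
        print(f"vocab {v:>7,}: ~{approx_params(v, hidden=768) / 1e6:.0f}M parameters")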
<<</Importance of Capacity and Vocabulary Size.>>>
<<<Importance of large-scale training with more data.>>>
As shown in Figure , the CommonCrawl Corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora. Figure shows that for the same BERTBase architecture, all models trained on CommonCrawl obtain significantly better performance.
Apart from scaling the training data, BIBREF10 also showed the benefits of training MLMs longer. In our experiments, we observed similar effects of large-scale training, such as increasing batch size (see Figure ) and training time, on model performance. Specifically, we found that using validation perplexity as a stopping criterion for pretraining caused the multilingual MLM in BIBREF1 to be under-tuned. In our experience, performance on downstream tasks continues to improve even after validation perplexity has plateaued. Combining this observation with our implementation of the unsupervised XLM-MLM objective, we were able to improve the performance of BIBREF1 from 71.3% to more than 75% average accuracy on XNLI, which was on par with their supervised translation language modeling (TLM) objective. Based on these results, and given our focus on unsupervised learning, we decided to not use the supervised TLM objective for training our models.
<<</Importance of large-scale training with more data.>>>
<<<Simplifying multilingual tokenization with Sentence Piece.>>>
The different language-specific tokenization tools used by mBERT and XLM-100 make these models more difficult to use on raw text. Instead, we train a Sentence Piece model (SPM) and apply it directly on raw text data for all languages. We did not observe any loss in performance for models trained with SPM when compared to models trained with language-specific preprocessing and byte-pair encoding (see Figure ) and hence use SPM for XLM-R.
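A minimal SentencePiece call mirroring this setup (a unigram model trained directly on raw text); "corpus.txt" is a placeholder and the options shown are illustrative rather than the exact settings used for XLM-R.
    import sentencepiece as spm

    spm.SentencePieceTrainer.train(
        input="corpus.txt",            # raw, untokenized text, one sentence per line
        model_prefix="spm_unigram",
        model_type="unigram",
        vocab_size=250_000,            # illustrative; a vocabulary this large needs a very large corpus
        character_coverage=0.9995,
    )
    sp = spm.SentencePieceProcessor(model_file="spm_unigram.model")
    print(sp.encode("Jambo dunia!", out_type=str))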
<<</Simplifying multilingual tokenization with Sentence Piece.>>>
<<</Improving and Understanding Multilingual Masked Language Models>>>
<<<Cross-lingual Understanding Results>>>
Based on these results, we adapt the setting of BIBREF1 and use a large Transformer model with 24 layers and 1024 hidden states, with a 250k vocabulary. We use the multilingual MLM loss and train our XLM-R model for 1.5 Million updates on five hundred 32GB Nvidia V100 GPUs with a batch size of 8192. We leverage the SPM-preprocessed text data from CommonCrawl in 100 languages and sample languages with $\alpha =0.3$. In this section, we show that it outperforms all previous techniques on cross-lingual benchmarks while getting performance on par with RoBERTa on the GLUE benchmark.
<<<XNLI.>>>
Table shows XNLI results and adds some additional details: (i) the number of models the approach induces (#M), (ii) the data on which the model was trained (D), and (iii) the number of languages the model was pretrained on (#lg). As we show in our results, these parameters significantly impact performance. Column #M specifies whether model selection was done separately on the dev set of each language ($N$ models), or on the joint dev set of all the languages (single model). We observe a 0.6 decrease in overall accuracy when we go from $N$ models to a single model - going from 71.3 to 70.7. We encourage the community to adopt this setting. For cross-lingual transfer, while this approach is not fully zero-shot transfer, we argue that in real applications, a small amount of supervised data is often available for validation in each language.
XLM-R sets a new state of the art on XNLI. On cross-lingual transfer, XLM-R obtains 80.1% accuracy, outperforming the XLM-100 and mBERT open-source models by 9.4% and 13.8% average accuracy. On the Swahili and Urdu low-resource languages, XLM-R outperforms XLM-100 by 13.8% and 9.3%, and mBERT by 21.6% and 13.7%. While XLM-R handles 100 languages, we also show that it outperforms the former state of the art Unicoder BIBREF17 and XLM (MLM+TLM), which handle only 15 languages, by 4.7% and 5% average accuracy respectively. Using the multilingual training of translate-train-all, XLM-R further improves performance and reaches 82.4% accuracy, a new overall state of the art for XNLI, outperforming Unicoder by 3.9%. Multilingual training is similar to practical applications where training sets are available in various languages for the same task. In the case of XNLI, datasets have been translated, and translate-train-all can be seen as some form of cross-lingual data augmentation BIBREF18, similar to back-translation BIBREF38.
<<</XNLI.>>>
<<<Question Answering.>>>
We also obtain new state of the art results on the MLQA cross-lingual question answering benchmark, introduced by BIBREF7. We follow their procedure by training on the English training data and evaluating on the 7 languages of the dataset. We report results in Table . XLM-R obtains F1 and accuracy scores of 70.0% and 52.2% while the previous state of the art was 61.6% and 43.5%. XLM-R also outperforms mBERT by 12.3% F1-score and 10.6% accuracy. It even outperforms BERT-Large on English, confirming its strong monolingual performance.
<<</Question Answering.>>>
<<</Cross-lingual Understanding Results>>>
<<<Multilingual versus Monolingual>>>
In this section, we present results of multilingual XLM models against monolingual BERT models.
<<<GLUE: XLM-R versus RoBERTa.>>>
Our goal is to obtain a multilingual model with strong performance on both cross-lingual understanding tasks and natural language understanding tasks for each language. To that end, we evaluate XLM-R on the GLUE benchmark. We show in Table  that XLM-R obtains better average dev performance than BERTLarge by 1.3% and reaches performance on par with XLNetLarge. The RoBERTa model outperforms XLM-R by only 1.3% on average. We believe future work can reduce this gap even further by alleviating the curse of multilinguality and vocabulary dilution. These results demonstrate the possibility of learning one model for many languages while maintaining strong performance on per-language downstream tasks.
<<</GLUE: XLM-R versus RoBERTa.>>>
<<<XNLI: XLM versus BERT.>>>
A recurrent criticism of multilingual models is that they obtain worse performance than their monolingual counterparts. In addition to the comparison of XLM-R and RoBERTa, we provide the first comprehensive study to assess this claim on the XNLI benchmark. We extend our comparison between multilingual XLM models and monolingual BERT models on 7 languages and compare performance in Table . We train 14 monolingual BERT models on Wikipedia and CommonCrawl, and two XLM-7 models. We add slightly more capacity in the vocabulary size of the multilingual model for a better comparison. To our surprise - and backed by further study on internal benchmarks - we found that multilingual models can outperform their monolingual BERT counterparts. Specifically, in Table , we show that for cross-lingual transfer, monolingual baselines outperform XLM-7 for both Wikipedia and CC by 1.6% and 1.3% average accuracy. However, by making use of multilingual training (translate-train-all) and leveraging training sets coming from multiple languages, XLM-7 can outperform the BERT models: our XLM-7 trained on CC obtains 80.0% average accuracy on the 7 languages, while the average performance of monolingual BERT models trained on CC is 77.5%. This is a surprising result that shows that the capacity of multilingual models to leverage training data coming from multiple languages for a particular task can overcome the capacity dilution problem to obtain better overall performance.
<<</XNLI: XLM versus BERT.>>>
<<</Multilingual versus Monolingual>>>
<<<Representation Learning for Low-resource Languages>>>
We observed in Table that pretraining on Wikipedia for Swahili and Urdu performed similarly to a randomly initialized model; most likely due to the small size of the data for these languages. On the other hand, pretraining on CC improved performance by up to 10 points. This confirms our assumption that mBERT and XLM-100 rely heavily on cross-lingual transfer but do not model the low-resource languages as well as XLM-R. Specifically, in the translate-train-all setting, we observe that the biggest gains for XLM models trained on CC, compared to their Wikipedia counterparts, are on low-resource languages; 7% and 4.8% improvement on Swahili and Urdu respectively.
<<</Representation Learning for Low-resource Languages>>>
<<</Analysis and Results>>>
<<<Conclusion>>>
In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages. We show that it provides strong gains over previous multilingual models like mBERT and XLM on classification, sequence labeling and question answering. We exposed the limitations of multilingual MLMs, in particular by uncovering the high-resource versus low-resource trade-off, the curse of multilinguality and the importance of key hyperparameters. We also expose the surprising effectiveness of multilingual models over monolingual models, and show strong improvements on low-resource languages.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Related Work, Abstract"
],
"type": "disordered_section"
}
|
1912.03184
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
GoodNewsEveryone: A Corpus of News Headlines Annotated with Emotions, Semantic Roles, and Reader Perception
<<<Abstract>>>
Most research on emotion analysis from text focuses on the task of emotion classification or emotion intensity regression. Fewer works address emotions as structured phenomena, which can be explained by the lack of relevant datasets and methods. We fill this gap by releasing a dataset of 5000 English news headlines annotated via crowdsourcing with their dominant emotions, emotion experiencers and textual cues, emotion causes and targets, as well as the reader's perception and emotion of the headline. We propose a multiphase annotation procedure which leads to high quality annotations on such a task via crowdsourcing. Finally, we develop a baseline for the task of automatic prediction of structures and discuss results. The corpus we release enables further research on emotion classification, emotion intensity prediction, emotion cause detection, and supports further qualitative studies.
<<</Abstract>>>
<<<Introduction>>>
Research in emotion analysis from text focuses on mapping words, sentences, or documents to emotion categories based on the models of Ekman1992 or Plutchik2001, which propose the emotion classes of joy, sadness, anger, fear, trust, disgust, anticipation and surprise. Emotion analysis has been applied to a variety of tasks including large scale social media mining BIBREF0, literature analysis BIBREF1, BIBREF2, lyrics and music analysis BIBREF3, BIBREF4, and the analysis of the development of emotions over time BIBREF5.
There are at least two types of questions which cannot yet be answered by these emotion analysis systems. Firstly, such systems do not often explicitly model the perspective of understanding the written discourse (reader, writer, or the text's point of view). For example, the headline “Djokovic happy to carry on cruising” BIBREF6 contains an explicit mention of joy carried by the word “happy”. However, it may evoke different emotions in a reader (e. g., the reader is a supporter of Roger Federer), and the same applies to the author of the headline. To the best of our knowledge, only one work takes this point into consideration BIBREF7. Secondly, the structure that can be associated with the emotion description in text is not uncovered. Questions like: “Who feels a particular emotion?” or “What causes that emotion?” still remain unaddressed. There has been almost no work in this direction, with only few exceptions in English BIBREF8, BIBREF9 and Mandarin BIBREF10, BIBREF11.
With this work, we argue that emotion analysis would benefit from a more fine-grained analysis that considers the full structure of an emotion, similar to the research in aspect-based sentiment analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15.
Consider the headline: “A couple infuriated officials by landing their helicopter in the middle of a nature reserve” BIBREF16 depicted on Figure FIGREF1. One could mark “officials” as the experiencer, “a couple” as the target, and “landing their helicopter in the middle of a nature reserve” as the cause of anger. Now let us imagine that the headline starts with “A cheerful couple” instead of “A couple”. A simple approach to emotion detection based on cue words will capture that this sentence contains descriptions of anger (“infuriated”) and joy (“cheerful”). It would, however, fail in attributing correct roles to the couple and the officials, thus, the distinction between their emotion experiences would remain hidden from us.
In this study, we focus on an annotation task with the goal of developing a dataset that would enable addressing the issues raised above. Specifically, we introduce the corpus GoodNewsEveryone, a novel dataset of news English headlines collected from 82 different sources analyzed in the Media Bias Chart BIBREF17 annotated for emotion class, emotion intensity, semantic roles (experiencer, cause, target, cue), and reader perspective. We use semantic roles, since identifying who feels what and why is essentially a semantic role labeling task BIBREF18. The roles we consider are a subset of those defined for the semantic frame for “Emotion” in FrameNet BIBREF19.
We focus on news headlines due to their brevity and density of contained information. Headlines often appeal to a reader's emotions, and hence are a potential good source for emotion analysis. In addition, news headlines are easy-to-obtain data across many languages, void of data privacy issues associated with social media and microblogging.
Our contributions are: (1) we design a two phase annotation procedure for emotion structures via crowdsourcing, (2) present the first resource of news headlines annotated for emotions, cues, intensity, experiencers, causes, targets, and reader emotion, and, (3), provide results of a baseline model to predict such roles in a sequence labeling setting. We provide our annotations at http://www.romanklinger.de/data-sets/GoodNewsEveryone.zip.
<<</Introduction>>>
<<<Related Work>>>
Our annotation is built upon different tasks and inspired by different existing resources, therefore it combines approaches from each of those. In what follows, we look at related work on each task and specify how it relates to our new corpus.
<<<Emotion Classification>>>
Emotion classification deals with mapping words, sentences, or documents to a set of emotions following psychological models such as those proposed by Ekman1992 (anger, disgust, fear, joy, sadness and surprise) or Plutchik2001; or continuous values of valence, arousal and dominance BIBREF20.
One way to create annotated datasets is via expert annotation BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF7. The creators of the ISEAR dataset make use of self-reporting instead, where subjects are asked to describe situations associated with a specific emotion BIBREF25. Crowdsourcing is another popular way to acquire human judgments BIBREF26, BIBREF9, BIBREF27, BIBREF28. Another recent dataset for emotion recognition reproduces the ISEAR dataset in a crowdsourcing setting for both English and German BIBREF29. Lastly, social network platforms play a central role in data acquisition with distant supervision, because they provide a cheap way to obtain large amounts of noisy data BIBREF26, BIBREF9, BIBREF30, BIBREF31. Table TABREF3 shows an overview of resources. More details can be found in Bostan2018.
<<</Emotion Classification>>>
<<<Emotion Intensity>>>
In emotion intensity prediction, the term intensity refers to the degree to which an emotion is experienced. For this task, there are only a few datasets available. To our knowledge, the first dataset annotated for emotion intensity is by Aman2007, who ask experts for ratings, followed by the datasets released for the EmoInt shared tasks BIBREF32, BIBREF28, both annotated via crowdsourcing using best-worst scaling. The annotation task can also be formalized as a classification task, similar to emotion classification, where the goal is to map a textual input to one of a set of predefined emotion intensity categories. This approach is used by Aman2007, who annotate high, moderate, and low intensity.
<<</Emotion Intensity>>>
<<<Cue or Trigger Words>>>
The task of finding a function that segments a textual input and finds the span indicating an emotion category is less researched. Cue or trigger word detection could also be formulated as an emotion classification task in which the set of classes to be predicted is extended to cover emotion categories together with their cues. The first work that annotated cues was done manually by one expert and three annotators on the domain of blog posts BIBREF21. Mohammad2014 annotate the cues of emotions in a corpus of $4,058$ electoral tweets from the US via crowdsourcing. Similar in annotation procedure, Yan2016emocues curate a corpus of 15,553 tweets and annotate it with 28 emotion categories, valence, arousal, and cues.
To the best of our knowledge, there is only one work BIBREF8 that leverages the annotations for cues and considers the task of emotion detection where the exact spans that represent the cues need to be predicted.
<<</Cue or Trigger Words>>>
<<<Emotion Cause Detection>>>
Detecting the cause of an expressed emotion in text has received relatively little attention compared to emotion detection. There are only a few works on English that focus on creating resources to tackle this task BIBREF23, BIBREF9, BIBREF8, BIBREF33. The task can be formulated in different ways. One is to define a closed set of potential causes after annotation; cause detection is then a classification task BIBREF9. Another setting is to find the cause in the text, formulated as segmentation or clause classification BIBREF23, BIBREF8. Finding the cause of an emotion is widely researched for Mandarin, in both resource creation and methods. Early works build on rule-based systems BIBREF34, BIBREF35, BIBREF36 which examine correlations between emotions and cause events in terms of linguistic cues. Follow-up works focus on both methods and corpus construction, showing large improvements over the early works BIBREF37, BIBREF38, BIBREF33, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, BIBREF11. The most recent work on cause extraction is being done on Mandarin and formulates the task jointly with emotion detection BIBREF10, BIBREF44, BIBREF45. With the exception of Mohammad2014, who annotate via crowdsourcing, all other datasets are manually labeled, usually using the W3C Emotion Markup Language.
<<</Emotion Cause Detection>>>
<<<Semantic Role Labeling of Emotions>>>
Semantic role labeling in the context of emotion analysis deals with extracting who feels (experiencer) which emotion (cue, class), towards whom the emotion is expressed (target), and what is the event that caused the emotion (stimulus). The relations are defined akin to FrameNet's Emotion frame BIBREF19.
There are two works that annotate semantic roles in the context of emotion. Firstly, Mohammad2014 annotate a dataset of $4,058$ tweets via crowdsourcing. The tweets were published before the U.S. presidential elections in 2012. The semantic roles considered are the experiencer, the stimulus, and the target. However, in the case of tweets, the experiencer is mostly the author of the tweet. Secondly, Kim2018 annotate and release REMAN (Relational EMotion ANnotation), a corpus of $1,720$ paragraphs based on Project Gutenberg. REMAN was manually annotated for spans which correspond to emotion cues and to entities/events in the roles of experiencers, targets, and causes of the emotion. They also provide baseline results for the automatic prediction of these structures and show that their models benefit from jointly modeling emotions with their roles in all subtasks. Our work follows Kim2018 in motivation and Mohammad2014 in procedure.
<<</Semantic Role Labeling of Emotions>>>
<<<Reader vs. Writer vs. Text Perspective>>>
Studying the impact of different annotation perspectives is another little-explored area. There are a few exceptions in sentiment analysis which investigate the relation between the sentiment of a blog post and the sentiment of its comments BIBREF46 or model the emotion of a news reader jointly with the emotion of a comment writer BIBREF47.
Fewer works exist in the context of emotion analysis. 5286061 deal with writer's and reader's emotions on online blogs and find that positive reader emotions tend to be linked to positive writer emotions. Buechel2017b and buechel-hahn-2017-emobank look into the effects of different perspectives on annotation quality and find that the reader perspective yields better inter-annotator agreement values.
<<</Reader vs. Writer vs. Text Perspective>>>
<<</Related Work>>>
<<<Data Collection & Annotation>>>
We gather the data in three steps: (1) collecting the news and the reactions they elicit in social media, (2) filtering the resulting set to retain relevant items, and (3) sampling the final selection using various metrics.
The headlines are then annotated via crowdsourcing in two phases by three annotators in the first phase and by five annotators in the second phase. As a last step, the annotations are adjudicated to form the gold standard. We describe each step in detail below.
<<<Collecting Headlines>>>
The first step consists of retrieving news headlines from the news publishers. We further retrieve content related to a news item from social media: tweets mentioning the headlines together with replies and Reddit posts that link to the headlines. We use this additional information for subsampling described later.
We manually select all news sources available as RSS feeds (82 out of 124) from the Media Bias Chart BIBREF48, a project that analyzes reliability (from original fact reporting to containing inaccurate/fabricated information) and political bias (from most extreme left to most extreme right) of U.S. news sources.
Our news crawler retrieved daily headlines from the feeds, together with the attached metadata (title, link, and summary of the news article) from March 2019 until October 2019. Every day, after the news collection finished, Twitter was queried for 50 valid tweets for each headline. In addition to that, for each collected tweet, we collect all valid replies and counts of being favorited, retweeted and replied to in the first 24 hours after its publication.
The last step in the pipeline is acquiring the top (“hot”) submissions in the /r/news and /r/worldnews subreddits, and their metadata, including the number of up- and downvotes, upvote ratio, number of comments, and the comments themselves.
<<</Collecting Headlines>>>
<<<Filtering & Postprocessing>>>
We remove any headlines that have fewer than 6 tokens (e. g., “Small or nothing”, “But Her Emails”, “Red for Higher Ed”), as well as those starting with certain phrases, such as “Ep.”, “Watch Live:”, “Playlist:”, “Guide to”, and “Ten Things”. We also filter out headlines that contain a date (e. g., “Headlines for March 15, 2019”) and words from the headlines which refer to visual content, like “video”, “photo”, “image”, “graphic”, “watch”, etc.
<<</Filtering & Postprocessing>>>
<<<Sampling Headlines>>>
We stratify the remaining headlines by source (150 headlines from each source) and subsample equally according to the following strategies: 1) randomly select headlines, 2) select headlines with high count of emotion terms, 3) select headlines that contain named entities, and 4) select the headlines with high impact on social media. Table TABREF16 shows how many headlines are selected by each sampling method in relation to the most dominant emotion (see Section SECREF25).
<<<Random Sampling.>>>
The goal of the first sampling method is to collect a random sample of headlines that is representative and not biased towards any source or content type. Note that the sample produced using this strategy might not be as rich with emotional content as the other samples.
<<</Random Sampling.>>>
<<<Sampling via NRC.>>>
For the second sampling strategy we hypothesize that headlines containing emotionally charged words are also likely to contain the structures we aim to annotate. This strategy selects headlines whose words are in the NRC dictionary BIBREF49.
<<</Sampling via NRC.>>>
<<<Sampling Entities.>>>
We further hypothesize that headlines that mention named entities may also contain experiencers or targets of emotions, and are therefore likely to present a complete emotion structure. This sampling method yields headlines that contain at least one entity name, according to spaCy's named entity recognizer trained on OntoNotes 5 and on Wikipedia. We consider organization names, persons, nationalities, religious and political groups, buildings, countries, and other locations.
<<</Sampling Entities.>>>
<<<Sampling based on Reddit & Twitter.>>>
The last sampling strategy involves our Twitter and Reddit metadata. This enables us to select and sample headlines based on their impact on social media (under the assumption that this correlates with emotion connotation of the headline). This strategy chooses them equally from the most favorited tweets, most retweeted headlines on Twitter, most replied to tweets on Twitter, as well as most upvoted and most commented on posts on Reddit.
<<</Sampling based on Reddit & Twitter.>>>
<<</Sampling Headlines>>>
<<<Annotation Procedure>>>
Using these sampling and filtering methods, we select $9,932$ headlines. Next, we set up two questionnaires (see Table TABREF17) for the two annotation phases that we describe below. We use Figure Eight.
<<<Phase 1: Selecting Emotional Headlines>>>
The first questionnaire is meant to determine the dominant emotion of a headline, if that exists, and whether the headline triggers an emotion in a reader. We hypothesize that these two questions help us to retain only relevant headlines for the next, more expensive, annotation phase.
During this phase, $9,932$ headlines were annotated by three annotators. The first question of the first phase (P1Q1) is: “Which emotion is most dominant in the given headline?” and annotators are provided a closed list of 15 emotion categories to which the category No emotion was added. The second question (P1Q2) aims to answer whether a given headline would stir up an emotion in most readers and the annotators are provided with only two possible answers (yes or no, see Table TABREF17 and Figure FIGREF1 for details).
Our set of 15 emotion categories is an extended set over Plutchik's emotion classes and comprises anger, annoyance, disgust, fear, guilt, joy, love, pessimism, negative surprise, optimism, positive surprise, pride, sadness, shame, and trust. Such a diverse set of emotion labels is meant to provide a more fine-grained analysis and equip the annotators with a wider range of answer choices.
<<</Phase 1: Selecting Emotional Headlines>>>
<<<Phase 2: Emotion and Role Annotation>>>
The annotations collected during the first phase are automatically ranked and the ranking is used to decide which headlines are further annotated in the second phase. Ranking consists of sorting by agreement on P1Q1, considering P1Q2 in the case of ties.
The top $5,000$ ranked headlines are annotated by five annotators for emotion class, intensity, reader emotion, and other emotions in case there is not only a dominant emotion. Along with these closed annotation tasks, the annotators are asked to answer several open questions, namely (1) who is the experiencer of the emotion (if mentioned), (2) what event triggered the annotated emotion (if mentioned), (3) if the emotion had a target, and (4) who or what is the target. The annotators are free to select multiple instances related to the dominant emotion by copy-paste into the answer field. For more details on the exact questions and example of answers, see Table TABREF17. Figure FIGREF1 shows a depiction of the procedure.
<<</Phase 2: Emotion and Role Annotation>>>
<<<Quality Control and Results>>>
To control the quality, we ensured that a single annotator annotates a maximum of 120 headlines (this protects the annotators from reading too many news headlines and from dominating the annotations). Secondly, we let only annotators who geographically reside in the U.S. contribute to the task.
We test the annotators on a set of $1,100$ test questions for the first phase (about 10% of the data) and 500 for the second phase. Annotators were required to pass 95% of them. The test questions were generated from hand-picked, non-ambiguous real headlines by swapping out relevant words so as to obtain a different annotation. For instance, for “Djokovic happy to carry on cruising”, we would swap “Djokovic” with a different entity and the cue “happy” with a different emotion expression.
Further, we exclude Phase 1 annotations that were done in less than 10 seconds and Phase 2 annotations that were done in less than 70 seconds.
After we collected all annotations, we found unreliable annotators for both phases in the following way: for each annotator and for each question, we compute the probability with which the annotator agrees with the response chosen by the majority. If the computed probability is more than two standard deviations away from the mean we discard all annotations done by that annotator.
On average, 310 distinct annotators needed 15 seconds per judgment in the first phase. We followed the guidelines of the platform regarding payment and decided to pay $0.02 USD for each judgment in Phase 1 (a total of $816.00 USD). For the second phase, 331 distinct annotators needed on average 1:17 minutes to perform one judgment. Each judgment was paid with $0.08 USD (a total of $2,720.00 USD).
<<</Quality Control and Results>>>
<<</Annotation Procedure>>>
<<<Adjudication of Annotations>>>
In this section, we describe the adjudication process we undertook to create the gold dataset and the difficulties we faced in creating a gold set out of the collected annotations.
The first step was to discard obviously wrong annotations for open questions, such as annotations in languages other than English, or annotations of spans that were not part of the headline. In the next step, we incrementally apply a set of rules to the annotated instances in a one-or-nothing fashion. Specifically, we incrementally test each instance against a number of criteria in such a way that if at least one criterion is satisfied, the instance is accepted and its adjudication is finalized. Instances that do not satisfy any criterion are adjudicated manually.
<<<Relative Majority Rule.>>>
This filter is applied to all questions regardless of their type. Effectively, whenever an entire annotation is agreed upon by at least two annotators, we use all parts of this annotation as the gold annotation. Given the headline depicted in Figure FIGREF1 with the following target role annotations by different annotators: “A couple”, “None”, “A couple”, “officials”, “their helicopter”. The resulting gold annotation is “A couple” and the adjudication process for the target ends.
<<</Relative Majority Rule.>>>
<<<Most Common Subsequence Rule.>>>
This rule is only applied to open text questions. It takes the most common smallest string intersection of all annotations. In the headline above, the experiencer annotations “A couple”, “infuriated officials”, “officials”, “officials”, “infuriated officials” would lead to “officials”.
<<</Most Common Subsequence Rule.>>>
<<<Longest Common Subsequence Rule.>>>
This rule is only applied if two different intersections are the most common (previous rule) and these two intersect. We then accept the longest common subsequence. Revisiting the example for deciding on the cause role with the annotations “by landing their helicopter in the nature reserve”, “by landing their helicopter”, “landing their helicopter in the nature reserve”, “a couple infuriated officials”, “infuriated”, the adjudicated gold is “landing their helicopter in the nature reserve”.
Table TABREF27 shows, through examples, how each rule works and how many instances are “solved” by each adjudication rule.
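To make the one-or-nothing cascade more concrete, the following is a minimal sketch of the first two string-based rules in Python, assuming that annotations are plain strings, that the “string intersection” of a pair is approximated by their longest common substring (via difflib), and that ties under the most-common-subsequence rule are broken by taking the longest candidate; the helper names are ours and this is not the authors' adjudication code.

```python
from collections import Counter
from difflib import SequenceMatcher

def relative_majority(annotations, min_votes=2):
    # Accept an answer verbatim if at least `min_votes` annotators gave it.
    answer, count = Counter(a.strip() for a in annotations).most_common(1)[0]
    return answer if count >= min_votes else None

def pairwise_intersections(annotations):
    # Longest common substring for every pair of annotations.
    spans = []
    for i, a in enumerate(annotations):
        for b in annotations[i + 1:]:
            m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
            if m.size:
                spans.append(a[m.a:m.a + m.size].strip())
    return spans

def adjudicate(annotations):
    gold = relative_majority(annotations)
    if gold is not None:
        return gold, "relative majority"
    counts = Counter(pairwise_intersections(annotations)).most_common()
    if not counts:
        return None, "manual"
    top = counts[0][1]
    tied = [span for span, c in counts if c == top]
    # Most common subsequence rule; ties are resolved in favour of the longest candidate.
    return max(tied, key=len), "common subsequence"

print(adjudicate(["A couple", "None", "A couple", "officials", "their helicopter"]))
# ('A couple', 'relative majority')
```

Instances for which no rule fires would then be passed on to manual adjudication, as described above.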
<<</Longest Common Subsequence Rule.>>>
<<<Noun Chunks>>>
For the role of experiencer, we accept only the most-common noun-chunk(s).
The annotations that are left after being processed by all the rules described above are being adjudicated manually by the authors of the paper. We show examples for all roles in Table TABREF29.
<<</Noun Chunks>>>
<<</Adjudication of Annotations>>>
<<</Data Collection & Annotation>>>
<<<Analysis>>>
<<<Inter-Annotator Agreement>>>
We calculate the agreement on the full set of annotations from each phase for the two question types, namely open vs. closed, where the closed questions deal with emotion classification and the open questions with the roles cue, experiencer, cause, and target.
<<<Emotion>>>
We use Fleiss' Kappa ($\kappa $) to measure the inter-annotator agreement for closed questions BIBREF50, BIBREF51. In addition, we report the average percentage of overlaps between all pairs of annotators (%) and the mean entropy of annotations in bits. Higher agreement correlates with lower entropy. As Table TABREF38 shows, the agreement on the question whether a headline is emotional or not obtains the highest agreement ($0.34$), followed by the question on intensity ($0.22$). The lowest agreement is on the question to find the most dominant emotion ($0.09$).
All metrics show comparably low agreement on the closed questions, especially on the question of the most dominant emotion. This is reasonable, given that emotion annotation is an ambiguous, subjective, and difficult task. This aspect led to the decision not to purely calculate a majority vote label but to consider the diversity in human interpretation of emotion categories and to publish the annotations by all annotators.
Table TABREF40 shows the counts of annotators agreeing on a particular emotion. We observe that Love, Pride, and Sadness show highest intersubjectivity followed closely by Fear and Joy. Anger and Annoyance show, given their similarity, lower scores. Note that the micro average of the basic emotions (+ love) is $0.21$ for when more than five annotators agree.
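For reference, agreement numbers of this kind can be reproduced with a short, generic implementation of Fleiss' $\kappa $ and of the mean annotation entropy; the sketch below assumes a per-item count matrix (items $\times $ categories) with an equal number of raters per item and is not the evaluation script used for the paper.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories) matrix of label counts, same number of raters per item."""
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]
    p_j = counts.sum(axis=0) / counts.sum()              # category proportions
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

def mean_entropy(counts):
    probs = counts / counts.sum(axis=1, keepdims=True)
    probs = np.where(probs > 0, probs, 1.0)              # zero cells contribute 0 bits
    return float((-probs * np.log2(probs)).sum(axis=1).mean())

# Toy example: 3 headlines, 5 annotators, 4 emotion categories.
toy = np.array([[5, 0, 0, 0], [2, 2, 1, 0], [1, 1, 1, 2]])
print(round(fleiss_kappa(toy), 3), round(mean_entropy(toy), 3))
```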
<<</Emotion>>>
<<<Roles>>>
Table TABREF41 presents the mean of pair-wise inter-annotator agreement for each role. We report average pair-wise Fleiss' $\kappa $, span-based exact $\textrm {F}_1$ over the annotated spans, accuracy, proportional token overlap, and the measure of agreement on set-valued items, MASI BIBREF52.
We observe a fair agreement on the open annotation tasks. The highest agreement is for the role of the Experiencer, followed by Cue, Cause, and Target.
This seems to correlate with the length of the annotated spans (see Table TABREF42). This finding is consistent with Kim2018. Presumably, Experiencers are easier to annotate as they often are noun phrases whereas causes can be convoluted relative clauses.
<<</Roles>>>
<<</Inter-Annotator Agreement>>>
<<<General Corpus Statistics>>>
In the following, we report numbers on the adjudicated data set for simplicity of discussion. Please note that we publish all annotations by all annotators and suggest that computational models should consider the distribution of annotations instead of one adjudicated gold. The latter would be a simplification which we consider not to be appropriate.
GoodNewsEveryone contains $5,000$ headlines from various news sources described in the Media Bias Chart BIBREF17. Overall, the corpus is composed of $56,612$ words ($354,173$ characters) out of which $17,513$ are unique. The headline length is short with 11 words on average. The shortest headline contains 6 words while the longest headline contains 32 words. The length of a headline in characters ranges from 24 the shortest to 199 the longest.
Table TABREF42 presents the total number of adjudicated annotations for each role in relation to the dominant emotion. GoodNewsEveryone consists of $5,000$ headlines, $3,312$ of which have a dominant emotion annotated via majority vote. The remaining $1,688$ headlines (up to $5,000$) ended in ties for the most dominant emotion category and were adjudicated manually. The emotion category Negative Surprise has the highest number of annotations, while Love has the lowest. In most cases, Cues are single tokens (e. g., “infuriates”, “slams”), while Cause has the largest proportion of annotations that span more than seven tokens on average (65% of all annotations in this category).
For the role of Experiencer, we see the lowest number of annotations (19%), which is a very different result to the one presented by Kim2018, where the role Experiencer was the most annotated. We hypothesize that this is the effect of the domain we annotated; it is more likely to encounter explicit experiencers in literature (as literary characters) than in news headlines. As we can see, the cue and the cause relations dominate the dataset (27% each), followed by Target (25%) relations.
Table TABREF42 also shows how many times each emotion triggered a certain relation. In this sense, Negative Surprise and Positive Surprise have triggered the most Experiencer, Cause, and Target relations, which is due to the prevalence of the annotations for these emotions in the dataset.
Further, Figure FIGREF44 shows the distances of the different roles from the cue. The causes and targets are predominantly realized to the right of the cue, while the experiencer occurs more often to the left of the cue.
<<</General Corpus Statistics>>>
<<</Analysis>>>
<<<Baseline>>>
As an estimate for the difficulty of the task, we provide baseline results. We formulate the task as sequence labeling of emotion cues, mentions of experiencers, targets, and causes with a bidirectional long short-term memory networks with a CRF layer (biLSTM-CRF) that uses Elmo embeddings as input and an IOB alphabet as output. The results are shown in Table TABREF45.
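A minimal PyTorch sketch of such a tagger is given below, assuming the pytorch-crf package for the CRF layer and random vectors standing in for the ELMo embeddings; tag-set size, dimensions, and class names are illustrative and this is not the authors' released baseline.

```python
import torch
import torch.nn as nn
from torchcrf import CRF   # pip install pytorch-crf

class BiLstmCrfTagger(nn.Module):
    """Sketch of a biLSTM-CRF span tagger over precomputed contextual embeddings."""
    def __init__(self, emb_dim=1024, hidden=256, num_tags=9):   # IOB over 4 roles + O
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, embeddings, tags, mask):
        emissions = self.proj(self.lstm(embeddings)[0])
        return -self.crf(emissions, tags, mask=mask, reduction="mean")

    def decode(self, embeddings, mask):
        emissions = self.proj(self.lstm(embeddings)[0])
        return self.crf.decode(emissions, mask=mask)

# Toy forward pass with random vectors standing in for ELMo embeddings.
model = BiLstmCrfTagger()
x = torch.randn(2, 11, 1024)                     # batch of 2 headlines, 11 tokens each
mask = torch.ones(2, 11, dtype=torch.bool)
tags = torch.zeros(2, 11, dtype=torch.long)
print(model.loss(x, tags, mask).item(), model.decode(x, mask)[0][:5])
```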
<<</Baseline>>>
<<<Conclusion & Future Work>>>
We introduce GoodNewsEveryone, a corpus of $5,000$ headlines annotated for emotion categories, semantic roles, and reader perspective. Such a dataset enables answering instance-based questions, such as, “who is experiencing what emotion and why?” or more general questions, like “what are typical causes of joy in media?”. To annotate the headlines, we employ a two-phase procedure and use crowdsourcing. To obtain a gold dataset, we aggregate the annotations through automatic heuristics.
As the evaluation of the inter-annotator agreement and the baseline model results show, the task of annotating structures encompassing emotions with the corresponding roles is a very difficult one.
However, we also note that developing such a resource via crowdsourcing has its limitations: due to the subjective nature of emotions, it is very challenging to come up with an annotation methodology that would ensure less dissenting annotations for the domain of headlines.
We release the raw dataset, the aggregated gold dataset, the carefully designed questionnaires, and baseline models as a freely available repository (partially only after acceptance of the paper). The released dataset will be useful for social science scholars, since it contains valuable information about the interactions of emotions in news headlines, and gives interesting insights into the language of emotion expression in media. Note that this dataset is also useful since it introduces a new dataset to test on structured prediction models. We are currently investigating the dataset for understanding the interaction between media bias and annotated emotions and roles.
<<</Conclusion & Future Work>>>
<<</Title>>>
|
{
"references": [
"Introduction, Data Collection & Annotation"
],
"type": "disordered_section"
}
|
1912.03184
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
GoodNewsEveryone: A Corpus of News Headlines Annotated with Emotions, Semantic Roles, and Reader Perception
<<<Abstract>>>
Most research on emotion analysis from text focuses on the task of emotion classification or emotion intensity regression. Fewer works address emotions as structured phenomena, which can be explained by the lack of relevant datasets and methods. We fill this gap by releasing a dataset of 5000 English news headlines annotated via crowdsourcing with their dominant emotions, emotion experiencers and textual cues, emotion causes and targets, as well as the reader's perception and emotion of the headline. We propose a multiphase annotation procedure which leads to high quality annotations on such a task via crowdsourcing. Finally, we develop a baseline for the task of automatic prediction of structures and discuss results. The corpus we release enables further research on emotion classification, emotion intensity prediction, emotion cause detection, and supports further qualitative studies.
<<</Abstract>>>
<<<Introduction>>>
Research in emotion analysis from text focuses on mapping words, sentences, or documents to emotion categories based on the models of Ekman1992 or Plutchik2001, which propose the emotion classes of joy, sadness, anger, fear, trust, disgust, anticipation and surprise. Emotion analysis has been applied to a variety of tasks including large scale social media mining BIBREF0, literature analysis BIBREF1, BIBREF2, lyrics and music analysis BIBREF3, BIBREF4, and the analysis of the development of emotions over time BIBREF5.
There are at least two types of questions which cannot yet be answered by these emotion analysis systems. Firstly, such systems do not often explicitly model the perspective of understanding the written discourse (reader, writer, or the text's point of view). For example, the headline “Djokovic happy to carry on cruising” BIBREF6 contains an explicit mention of joy carried by the word “happy”. However, it may evoke different emotions in a reader (e. g., the reader is a supporter of Roger Federer), and the same applies to the author of the headline. To the best of our knowledge, only one work takes this point into consideration BIBREF7. Secondly, the structure that can be associated with the emotion description in text is not uncovered. Questions like: “Who feels a particular emotion?” or “What causes that emotion?” still remain unaddressed. There has been almost no work in this direction, with only few exceptions in English BIBREF8, BIBREF9 and Mandarin BIBREF10, BIBREF11.
With this work, we argue that emotion analysis would benefit from a more fine-grained analysis that considers the full structure of an emotion, similar to the research in aspect-based sentiment analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15.
Consider the headline: “A couple infuriated officials by landing their helicopter in the middle of a nature reserve” BIBREF16 depicted on Figure FIGREF1. One could mark “officials” as the experiencer, “a couple” as the target, and “landing their helicopter in the middle of a nature reserve” as the cause of anger. Now let us imagine that the headline starts with “A cheerful couple” instead of “A couple”. A simple approach to emotion detection based on cue words will capture that this sentence contains descriptions of anger (“infuriated”) and joy (“cheerful”). It would, however, fail in attributing correct roles to the couple and the officials, thus, the distinction between their emotion experiences would remain hidden from us.
In this study, we focus on an annotation task with the goal of developing a dataset that would enable addressing the issues raised above. Specifically, we introduce the corpus GoodNewsEveryone, a novel dataset of news English headlines collected from 82 different sources analyzed in the Media Bias Chart BIBREF17 annotated for emotion class, emotion intensity, semantic roles (experiencer, cause, target, cue), and reader perspective. We use semantic roles, since identifying who feels what and why is essentially a semantic role labeling task BIBREF18. The roles we consider are a subset of those defined for the semantic frame for “Emotion” in FrameNet BIBREF19.
We focus on news headlines due to their brevity and density of contained information. Headlines often appeal to a reader's emotions, and hence are a potential good source for emotion analysis. In addition, news headlines are easy-to-obtain data across many languages, void of data privacy issues associated with social media and microblogging.
Our contributions are: (1) we design a two phase annotation procedure for emotion structures via crowdsourcing, (2) present the first resource of news headlines annotated for emotions, cues, intensity, experiencers, causes, targets, and reader emotion, and, (3), provide results of a baseline model to predict such roles in a sequence labeling setting. We provide our annotations at http://www.romanklinger.de/data-sets/GoodNewsEveryone.zip.
<<</Introduction>>>
<<<Related Work>>>
Our annotation is built upon different tasks and inspired by different existing resources, therefore it combines approaches from each of those. In what follows, we look at related work on each task and specify how it relates to our new corpus.
<<<Emotion Classification>>>
Emotion classification deals with mapping words, sentences, or documents to a set of emotions following psychological models such as those proposed by Ekman1992 (anger, disgust, fear, joy, sadness and surprise) or Plutchik2001; or continuous values of valence, arousal and dominance BIBREF20.
One way to create annotated datasets is via expert annotation BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF7. The creators of the ISEAR dataset make use of self-reporting instead, where subjects are asked to describe situations associated with a specific emotion BIBREF25. Crowdsourcing is another popular way to acquire human judgments BIBREF26, BIBREF9, BIBREF9, BIBREF27, BIBREF28. Another recent dataset for emotion recognition reproduces the ISEAR dataset in a crowdsourcing setting for both English and German BIBREF29. Lastly, social network platforms play a central role in data acquisition with distant supervision, because they provide a cheap way to obtain large amounts of noisy data BIBREF26, BIBREF9, BIBREF30, BIBREF31. Table TABREF3 shows an overview of resources. More details could be found in Bostan2018.
<<</Emotion Classification>>>
<<<Emotion Intensity>>>
In emotion intensity prediction, the term intensity refers to the degree an emotion is experienced. For this task, there are only a few datasets available. To our knowledge, the first dataset annotated for emotion intensity is by Aman2007, who ask experts for ratings, followed by the datasets released for the EmoInt shared tasks BIBREF32, BIBREF28, both annotated via crowdsourcing through the best-worst scaling. The annotation task can also be formalized as a classification task, similarly to the emotion classification task, where the goal would be to map some textual input to a class from a set of predefined classes of emotion intensity categories. This approach is used by Aman2007, where they annotate high, moderate, and low.
<<</Emotion Intensity>>>
<<<Cue or Trigger Words>>>
The task of finding a function that segments a textual input and finds the span indicating an emotion category is less researched. Cue or trigger words detection could also be formulated as an emotion classification task for which the set of classes to be predicted is extended to cover other emotion categories with cues. First work that annotated cues was done manually by one expert and three annotators on the domain of blog posts BIBREF21. Mohammad2014 annotates the cues of emotions in a corpus of $4,058$ electoral tweets from US via crowdsourcing. Similar in annotation procedure, Yan2016emocues curate a corpus of 15,553 tweets and annotate it with 28 emotion categories, valence, arousal, and cues.
To the best of our knowledge, there is only one work BIBREF8 that leverages the annotations for cues and considers the task of emotion detection where the exact spans that represent the cues need to be predicted.
<<</Cue or Trigger Words>>>
<<<Emotion Cause Detection>>>
Detecting the cause of an expressed emotion in text has received relatively little attention compared to emotion detection. There are only a few works on English that focus on creating resources to tackle this task BIBREF23, BIBREF9, BIBREF8, BIBREF33. The task can be formulated in different ways. One is to define a closed set of potential causes after annotation. Then, cause detection is a classification task BIBREF9. Another setting is to find the cause in the text. This is formulated as segmentation or clause classification BIBREF23, BIBREF8. Finding the cause of an emotion is widely researched for Mandarin, in both resource creation and methods. Early works build on rule-based systems BIBREF34, BIBREF35, BIBREF36 which examine correlations between emotions and cause events in terms of linguistic cues. The works that follow up focus on both methods and corpus construction, showing large improvements over the early works BIBREF37, BIBREF38, BIBREF33, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, BIBREF11. The most recent work on cause extraction is being done on Mandarin and formulates the task jointly with emotion detection BIBREF10, BIBREF44, BIBREF45. With the exception of Mohammad2014, who annotate via crowdsourcing, all other datasets are manually labeled, usually using the W3C Emotion Markup Language.
<<</Emotion Cause Detection>>>
<<<Semantic Role Labeling of Emotions>>>
Semantic role labeling in the context of emotion analysis deals with extracting who feels (experiencer) which emotion (cue, class), towards whom the emotion is expressed (target), and what is the event that caused the emotion (stimulus). The relations are defined akin to FrameNet's Emotion frame BIBREF19.
There are two works that annotate semantic roles in the context of emotion. Firstly, Mohammad2014 annotate a dataset of $4,058$ tweets via crowdsourcing. The tweets were published before the U.S. presidential elections in 2012. The semantic roles considered are the experiencer, the stimulus, and the target. However, in the case of tweets, the experiencer is mostly the author of the tweet. Secondly, Kim2018 annotate and release REMAN (Relational EMotion ANnotation), a corpus of $1,720$ paragraphs based on Project Gutenberg. REMAN was manually annotated for spans which correspond to emotion cues and to entities/events in the roles of experiencers, targets, and causes of the emotion. They also provide baseline results for the automatic prediction of these structures and show that their models benefit from jointly modeling emotions with their roles in all subtasks. Our work follows Kim2018 in motivation and Mohammad2014 in procedure.
<<</Semantic Role Labeling of Emotions>>>
<<<Reader vs. Writer vs. Text Perspective>>>
Studying the impact of different annotation perspectives is another little explored area. There are few exceptions in sentiment analysis which investigate the relation between sentiment of a blog post and the sentiment of their comments BIBREF46 or model the emotion of a news reader jointly with the emotion of a comment writer BIBREF47.
Fewer works exist in the context of emotion analysis. 5286061 deal with writer's and reader's emotions on online blogs and find that positive reader emotions tend to be linked to positive writer emotions. Buechel2017b and buechel-hahn-2017-emobank look into the effects of different perspectives on annotation quality and find that the reader perspective yields better inter-annotator agreement values.
<<</Reader vs. Writer vs. Text Perspective>>>
<<</Related Work>>>
<<<Data Collection & Annotation>>>
We gather the data in three steps: (1) collecting the news and the reactions they elicit in social media, (2) filtering the resulting set to retain relevant items, and (3) sampling the final selection using various metrics.
The headlines are then annotated via crowdsourcing in two phases by three annotators in the first phase and by five annotators in the second phase. As a last step, the annotations are adjudicated to form the gold standard. We describe each step in detail below.
<<<Collecting Headlines>>>
The first step consists of retrieving news headlines from the news publishers. We further retrieve content related to a news item from social media: tweets mentioning the headlines together with replies and Reddit posts that link to the headlines. We use this additional information for subsampling described later.
We manually select all news sources available as RSS feeds (82 out of 124) from the Media Bias Chart BIBREF48, a project that analyzes reliability (from original fact reporting to containing inaccurate/fabricated information) and political bias (from most extreme left to most extreme right) of U.S. news sources.
Our news crawler retrieved daily headlines from the feeds, together with the attached metadata (title, link, and summary of the news article) from March 2019 until October 2019. Every day, after the news collection finished, Twitter was queried for 50 valid tweets for each headline. In addition to that, for each collected tweet, we collect all valid replies and counts of being favorited, retweeted and replied to in the first 24 hours after its publication.
The last step in the pipeline is acquiring the top (“hot”) submissions in the /r/news and /r/worldnews subreddits, and their metadata, including the number of up- and downvotes, upvote ratio, number of comments, and the comments themselves.
<<</Collecting Headlines>>>
<<<Filtering & Postprocessing>>>
We remove any headlines that have fewer than 6 tokens (e. g., “Small or nothing”, “But Her Emails”, “Red for Higher Ed”), as well as those starting with certain phrases, such as “Ep.”, “Watch Live:”, “Playlist:”, “Guide to”, and “Ten Things”. We also filter out headlines that contain a date (e. g., “Headlines for March 15, 2019”) and words from the headlines which refer to visual content, like “video”, “photo”, “image”, “graphic”, “watch”, etc.
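A rough sketch of this filtering step is shown below; the prefix and keyword lists contain only the examples mentioned above (not the full lists used for the corpus), the date pattern is a simplification, and the visual-content keywords are treated here as a headline-level filter.

```python
import re

BAD_PREFIXES = ("Ep.", "Watch Live:", "Playlist:", "Guide to", "Ten Things")
VISUAL_WORDS = {"video", "photo", "image", "graphic", "watch"}
DATE_PATTERN = re.compile(
    r"\b(January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2},?\s+\d{4}\b")

def keep_headline(headline):
    tokens = headline.split()
    if len(tokens) < 6:                                    # too short
        return False
    if headline.startswith(BAD_PREFIXES):                  # unwanted prefix
        return False
    if DATE_PATTERN.search(headline):                      # contains a date
        return False
    if any(t.strip(",.:;!?").lower() in VISUAL_WORDS for t in tokens):
        return False                                       # refers to visual content
    return True

print(keep_headline("Small or nothing"))                              # False (too short)
print(keep_headline("Your daily news headlines for March 15, 2019"))  # False (date)
```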
<<</Filtering & Postprocessing>>>
<<<Sampling Headlines>>>
We stratify the remaining headlines by source (150 headlines from each source) and subsample equally according to the following strategies: 1) randomly select headlines, 2) select headlines with high count of emotion terms, 3) select headlines that contain named entities, and 4) select the headlines with high impact on social media. Table TABREF16 shows how many headlines are selected by each sampling method in relation to the most dominant emotion (see Section SECREF25).
<<<Random Sampling.>>>
The goal of the first sampling method is to collect a random sample of headlines that is representative and not biased towards any source or content type. Note that the sample produced using this strategy might not be as rich with emotional content as the other samples.
<<</Random Sampling.>>>
<<<Sampling via NRC.>>>
For the second sampling strategy we hypothesize that headlines containing emotionally charged words are also likely to contain the structures we aim to annotate. This strategy selects headlines whose words are in the NRC dictionary BIBREF49.
<<</Sampling via NRC.>>>
<<<Sampling Entities.>>>
We further hypothesize that headlines that mention named entities may also contain experiencers or targets of emotions, and are therefore likely to present a complete emotion structure. This sampling method yields headlines that contain at least one entity name, according to spaCy's named entity recognizer trained on OntoNotes 5 and on Wikipedia. We consider organization names, persons, nationalities, religious and political groups, buildings, countries, and other locations.
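The entity check itself can be sketched as follows, assuming a spaCy English pipeline trained on OntoNotes 5 (e. g., en_core_web_sm) and using the OntoNotes label names that correspond to the entity types listed above; the exact label set used for the corpus may differ.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # OntoNotes 5-trained English pipeline
RELEVANT_LABELS = {"ORG", "PERSON", "NORP", "FAC", "GPE", "LOC"}

def has_relevant_entity(headline):
    return any(ent.label_ in RELEVANT_LABELS for ent in nlp(headline).ents)

sampled = [h for h in ["Djokovic happy to carry on cruising",
                       "Small or nothing"] if has_relevant_entity(h)]
print(sampled)   # likely ["Djokovic happy to carry on cruising"]
```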
<<</Sampling Entities.>>>
<<<Sampling based on Reddit & Twitter.>>>
The last sampling strategy involves our Twitter and Reddit metadata. This enables us to select and sample headlines based on their impact on social media (under the assumption that this correlates with emotion connotation of the headline). This strategy chooses them equally from the most favorited tweets, most retweeted headlines on Twitter, most replied to tweets on Twitter, as well as most upvoted and most commented on posts on Reddit.
<<</Sampling based on Reddit & Twitter.>>>
<<</Sampling Headlines>>>
<<<Annotation Procedure>>>
Using these sampling and filtering methods, we select $9,932$ headlines. Next, we set up two questionnaires (see Table TABREF17) for the two annotation phases that we describe below. We use Figure Eight.
<<<Phase 1: Selecting Emotional Headlines>>>
The first questionnaire is meant to determine the dominant emotion of a headline, if that exists, and whether the headline triggers an emotion in a reader. We hypothesize that these two questions help us to retain only relevant headlines for the next, more expensive, annotation phase.
During this phase, $9,932$ headlines were annotated by three annotators. The first question of the first phase (P1Q1) is: “Which emotion is most dominant in the given headline?” and annotators are provided a closed list of 15 emotion categories to which the category No emotion was added. The second question (P1Q2) aims to answer whether a given headline would stir up an emotion in most readers and the annotators are provided with only two possible answers (yes or no, see Table TABREF17 and Figure FIGREF1 for details).
Our set of 15 emotion categories is an extended set over Plutchik's emotion classes and comprises anger, annoyance, disgust, fear, guilt, joy, love, pessimism, negative surprise, optimism, positive surprise, pride, sadness, shame, and trust. Such a diverse set of emotion labels is meant to provide a more fine-grained analysis and equip the annotators with a wider range of answer choices.
<<</Phase 1: Selecting Emotional Headlines>>>
<<<Phase 2: Emotion and Role Annotation>>>
The annotations collected during the first phase are automatically ranked and the ranking is used to decide which headlines are further annotated in the second phase. Ranking consists of sorting by agreement on P1Q1, considering P1Q2 in the case of ties.
The top $5,000$ ranked headlines are annotated by five annotators for emotion class, intensity, reader emotion, and other emotions in case there is not only a dominant emotion. Along with these closed annotation tasks, the annotators are asked to answer several open questions, namely (1) who is the experiencer of the emotion (if mentioned), (2) what event triggered the annotated emotion (if mentioned), (3) if the emotion had a target, and (4) who or what is the target. The annotators are free to select multiple instances related to the dominant emotion by copy-paste into the answer field. For more details on the exact questions and example of answers, see Table TABREF17. Figure FIGREF1 shows a depiction of the procedure.
<<</Phase 2: Emotion and Role Annotation>>>
<<<Quality Control and Results>>>
To control the quality, we ensured that a single annotator annotates a maximum of 120 headlines (this protects the annotators from reading too many news headlines and from dominating the annotations). Secondly, we let only annotators who geographically reside in the U.S. contribute to the task.
We test the annotators on a set of $1,100$ test questions for the first phase (about 10% of the data) and 500 for the second phase. Annotators were required to pass 95% of them. The test questions were generated from hand-picked, non-ambiguous real headlines by swapping out relevant words so as to obtain a different annotation. For instance, for “Djokovic happy to carry on cruising”, we would swap “Djokovic” with a different entity and the cue “happy” with a different emotion expression.
Further, we exclude Phase 1 annotations that were done in less than 10 seconds and Phase 2 annotations that were done in less than 70 seconds.
After we collected all annotations, we found unreliable annotators for both phases in the following way: for each annotator and for each question, we compute the probability with which the annotator agrees with the response chosen by the majority. If the computed probability is more than two standard deviations away from the mean we discard all annotations done by that annotator.
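The outlier criterion can be sketched as below, assuming that the per-annotator probability of agreeing with the majority has already been computed; annotator ids and scores are made up for illustration.

```python
import numpy as np

def unreliable_annotators(agreement):
    # agreement: annotator id -> probability of agreeing with the majority response.
    ids = list(agreement)
    probs = np.array([agreement[i] for i in ids])
    mean, std = probs.mean(), probs.std()
    return [i for i, p in zip(ids, probs) if abs(p - mean) > 2 * std]

scores = {"a1": 0.82, "a2": 0.79, "a3": 0.80, "a4": 0.81,
          "a5": 0.78, "a6": 0.83, "a7": 0.80, "a8": 0.35}
print(unreliable_annotators(scores))   # ['a8'] for this toy distribution
```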
On average, 310 distinct annotators needed 15 seconds per judgment in the first phase. We followed the guidelines of the platform regarding payment and decided to pay $0.02 USD for each judgment in Phase 1 (a total of $816.00 USD). For the second phase, 331 distinct annotators needed on average 1:17 minutes to perform one judgment. Each judgment was paid with $0.08 USD (a total of $2,720.00 USD).
<<</Quality Control and Results>>>
<<</Annotation Procedure>>>
<<<Adjudication of Annotations>>>
In this section, we describe the adjudication process we undertook to create the gold dataset and the difficulties we faced in creating a gold set out of the collected annotations.
The first step was to discard obviously wrong annotations for open questions, such as annotations in languages other than English, or annotations of spans that were not part of the headline. In the next step, we incrementally apply a set of rules to the annotated instances in a one-or-nothing fashion. Specifically, we incrementally test each instance against a number of criteria in such a way that if at least one criterion is satisfied, the instance is accepted and its adjudication is finalized. Instances that do not satisfy any criterion are adjudicated manually.
<<<Relative Majority Rule.>>>
This filter is applied to all questions regardless of their type. Effectively, whenever an entire annotation is agreed upon by at least two annotators, we use all parts of this annotation as the gold annotation. Given the headline depicted in Figure FIGREF1 with the following target role annotations by different annotators: “A couple”, “None”, “A couple”, “officials”, “their helicopter”. The resulting gold annotation is “A couple” and the adjudication process for the target ends.
<<</Relative Majority Rule.>>>
<<<Most Common Subsequence Rule.>>>
This rule is only applied to open text questions. It takes the most common smallest string intersection of all annotations. In the headline above, the experiencer annotations “A couple”, “infuriated officials”, “officials”, “officials”, “infuriated officials” would lead to “officials”.
<<</Most Common Subsequence Rule.>>>
<<<Longest Common Subsequence Rule.>>>
This rule is only applied if two different intersections are the most common (previous rule) and these two intersect. We then accept the longest common subsequence. Revisiting the example for deciding on the cause role with the annotations “by landing their helicopter in the nature reserve”, “by landing their helicopter”, “landing their helicopter in the nature reserve”, “a couple infuriated officials”, “infuriated”, the adjudicated gold is “landing their helicopter in the nature reserve”.
Table TABREF27 shows, through examples, how each rule works and how many instances are “solved” by each adjudication rule.
<<</Longest Common Subsequence Rule.>>>
<<<Noun Chunks>>>
For the role of experiencer, we accept only the most-common noun-chunk(s).
The annotations that are left after being processed by all the rules described above are being adjudicated manually by the authors of the paper. We show examples for all roles in Table TABREF29.
<<</Noun Chunks>>>
<<</Adjudication of Annotations>>>
<<</Data Collection & Annotation>>>
<<<Analysis>>>
<<<Inter-Annotator Agreement>>>
We calculate the agreement on the full set of annotations from each phase for the two question types, namely open vs. closed, where the closed questions deal with emotion classification and the open questions with the roles cue, experiencer, cause, and target.
<<<Emotion>>>
We use Fleiss' Kappa ($\kappa $) to measure the inter-annotator agreement for closed questions BIBREF50, BIBREF51. In addition, we report the average percentage of overlaps between all pairs of annotators (%) and the mean entropy of annotations in bits. Higher agreement correlates with lower entropy. As Table TABREF38 shows, the agreement on the question whether a headline is emotional or not obtains the highest agreement ($0.34$), followed by the question on intensity ($0.22$). The lowest agreement is on the question to find the most dominant emotion ($0.09$).
All metrics show comparably low agreement on the closed questions, especially on the question of the most dominant emotion. This is reasonable, given that emotion annotation is an ambiguous, subjective, and difficult task. This aspect led to the decision not to purely calculate a majority vote label but to consider the diversity in human interpretation of emotion categories and to publish the annotations by all annotators.
Table TABREF40 shows the counts of annotators agreeing on a particular emotion. We observe that Love, Pride, and Sadness show highest intersubjectivity followed closely by Fear and Joy. Anger and Annoyance show, given their similarity, lower scores. Note that the micro average of the basic emotions (+ love) is $0.21$ for when more than five annotators agree.
<<</Emotion>>>
<<<Roles>>>
Table TABREF41 presents the mean of pair-wise inter-annotator agreement for each role. We report average pair-wise Fleiss' $\kappa $, span-based exact $\textrm {F}_1$ over the annotated spans, accuracy, proportional token overlap, and the measure of agreement on set-valued items, MASI BIBREF52.
We observe a fair agreement on the open annotation tasks. The highest agreement is for the role of the Experiencer, followed by Cue, Cause, and Target.
This seems to correlate with the length of the annotated spans (see Table TABREF42). This finding is consistent with Kim2018. Presumably, Experiencers are easier to annotate as they often are noun phrases whereas causes can be convoluted relative clauses.
<<</Roles>>>
<<</Inter-Annotator Agreement>>>
<<<General Corpus Statistics>>>
In the following, we report numbers on the adjudicated data set for simplicity of discussion. Please note that we publish all annotations by all annotators and suggest that computational models should consider the distribution of annotations instead of one adjudicated gold. The latter would be a simplification which we consider not to be appropriate.
GoodNewsEveryone contains $5,000$ headlines from various news sources described in the Media Bias Chart BIBREF17. Overall, the corpus is composed of $56,612$ words ($354,173$ characters) out of which $17,513$ are unique. The headline length is short with 11 words on average. The shortest headline contains 6 words while the longest headline contains 32 words. The length of a headline in characters ranges from 24 the shortest to 199 the longest.
Table TABREF42 presents the total number of adjudicated annotations for each role in relation to the dominant emotion. GoodNewsEveryone consists of $5,000$ headlines, $3,312$ of which have a dominant emotion annotated via majority vote. The remaining $1,688$ headlines (up to $5,000$) ended in ties for the most dominant emotion category and were adjudicated manually. The emotion category Negative Surprise has the highest number of annotations, while Love has the lowest. In most cases, Cues are single tokens (e. g., “infuriates”, “slams”), while Cause has the largest proportion of annotations that span more than seven tokens on average (65% of all annotations in this category).
For the role of Experiencer, we see the lowest number of annotations (19%), which is a very different result to the one presented by Kim2018, where the role Experiencer was the most annotated. We hypothesize that this is the effect of the domain we annotated; it is more likely to encounter explicit experiencers in literature (as literary characters) than in news headlines. As we can see, the cue and the cause relations dominate the dataset (27% each), followed by Target (25%) relations.
Table TABREF42 also shows how many times each emotion triggered a certain relation. In this sense, Negative Surprise and Positive Surprise have triggered the most Experiencer, Cause, and Target relations, which is due to the prevalence of the annotations for these emotions in the dataset.
Further, Figure FIGREF44 shows the distances of the different roles from the cue. The causes and targets are predominantly realized to the right of the cue, while the experiencer occurs more often to the left of the cue.
<<</General Corpus Statistics>>>
<<</Analysis>>>
<<<Baseline>>>
As an estimate for the difficulty of the task, we provide baseline results. We formulate the task as sequence labeling of emotion cues, mentions of experiencers, targets, and causes with a bidirectional long short-term memory networks with a CRF layer (biLSTM-CRF) that uses Elmo embeddings as input and an IOB alphabet as output. The results are shown in Table TABREF45.
<<</Baseline>>>
<<<Conclusion & Future Work>>>
We introduce GoodNewsEveryone, a corpus of $5,000$ headlines annotated for emotion categories, semantic roles, and reader perspective. Such a dataset enables answering instance-based questions, such as, “who is experiencing what emotion and why?” or more general questions, like “what are typical causes of joy in media?”. To annotate the headlines, we employ a two-phase procedure and use crowdsourcing. To obtain a gold dataset, we aggregate the annotations through automatic heuristics.
As the evaluation of the inter-annotator agreement and the baseline model results show, the task of annotating structures encompassing emotions with the corresponding roles is a very difficult one.
However, we also note that developing such a resource via crowdsourcing has its limitations: due to the subjective nature of emotions, it is very challenging to come up with an annotation methodology that would ensure less dissenting annotations for the domain of headlines.
We release the raw dataset, the aggregated gold dataset, the carefully designed questionnaires, and baseline models as a freely available repository (partially only after acceptance of the paper). The released dataset will be useful for social science scholars, since it contains valuable information about the interactions of emotions in news headlines, and gives interesting insights into the language of emotion expression in media. Note that this dataset is also useful since it introduces a new dataset to test on structured prediction models. We are currently investigating the dataset for understanding the interaction between media bias and annotated emotions and roles.
<<</Conclusion & Future Work>>>
<<</Title>>>
|
{
"references": [
"Related Work, Introduction"
],
"type": "disordered_section"
}
|
1908.05969
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Simplify the Usage of Lexicon in Chinese NER
<<<Abstract>>>
Recently, many works have tried to utilize word lexicons to augment the performance of Chinese named entity recognition (NER). As a representative work in this line, Lattice-LSTM \cite{zhang2018chinese} has achieved new state-of-the-art performance on several benchmark Chinese NER datasets. However, Lattice-LSTM suffers from a complicated model architecture, resulting in low computational efficiency. This heavily limits its application in many industrial areas, which require real-time NER responses. In this work, we ask the question: can we simplify the usage of the lexicon and, at the same time, achieve comparable performance with Lattice-LSTM for Chinese NER? ::: Starting from this question and motivated by the idea of Lattice-LSTM, we propose a concise but effective method to incorporate the lexicon information into the vector representations of characters. This way, our method can avoid introducing a complicated sequence modeling architecture to model the lexicon information. Instead, it only needs to subtly adjust the character representation layer of the neural sequence model. Experimental study on four benchmark Chinese NER datasets shows that our method achieves much faster inference speed and comparable or better performance over Lattice-LSTM and its followers. It also shows that our method can be easily transferred across different neural architectures.
<<</Abstract>>>
<<<Introduction>>>
Named Entity Recognition (NER) is concerned with identifying named entities, such as person, location, product, and organization names, in unstructured text. In languages where words are naturally separated (e.g., English), NER was conventionally formulated as a sequence labeling problem, and the state-of-the-art results have been achieved by those neural-network-based models BIBREF1, BIBREF2, BIBREF3, BIBREF4.
Compared with NER in English, Chinese NER is more difficult since sentences in Chinese are not previously segmented. Thus, one common practice in Chinese NER is first performing word segmentation using an existing CWS system and then applying a word-level sequence labeling model to the segmented sentence BIBREF5, BIBREF6. However, it is inevitable that the CWS system will wrongly segment the query sequence. This will, in turn, result in entity boundary detection errors and even entity category prediction errors in the following NER. Take the character sequence “南京市 (Nanjing) / 长江大桥 (Yangtze River Bridge)" as an example, where “/" indicates the gold segmentation result. If the sequence is segmented into “南京 (Nanjing) / 市长 (mayor) / 江大桥 (Daqiao Jiang)", the word-based NER system is definitely not able to correctly recognize “南京市 (Nanjing)" and “长江大桥 (Yangtze River Bridge)" as two entities of the location type. Instead, it is possible to incorrectly treat “南京 (Nanjing)" as a location entity and predict “江大桥 (Daqiao Jiang)" to be a person's name. Therefore, some works resort to performing Chinese NER directly on the character level, and it has been shown that this practice can achieve better performance BIBREF7, BIBREF8, BIBREF9, BIBREF0.
A drawback of the purely character-based NER method is that word information, which has been proved to be useful, is not fully exploited. With this consideration, BIBREF0 proposed incorporating a word lexicon into the character-based NER model. In addition, instead of heuristically choosing a word for a character if it matches multiple words of the lexicon, they proposed preserving all matched words of the character, leaving the following NER model to determine which matched word to apply. To achieve this, they introduced an elaborate modification to the LSTM-based sequence modeling layer of the LSTM-CRF model BIBREF1 to jointly model the character sequence and all of its matched words. Experimental studies on four public Chinese NER datasets show that Lattice-LSTM can achieve comparable or better performance on Chinese NER over existing methods.
Although successful, there exists a big problem in Lattice-LSTM that limits its application in many industrial areas, where real-time NER responses are needed. That is, its model architecture is quite complicated. This slows down its inference speed and makes it difficult to perform training and inference in parallel. In addition, it is far from easy to transfer the structure of Lattice-LSTM to other neural-network architectures (e.g., convolutional neural networks and transformers), which may be more suitable for some specific datasets.
In this work, we aim to find an easier way to realize the idea of Lattice-LSTM, i.e., incorporating all matched words of the sentence into the character-based NER model. The first principle of our method design is to achieve fast inference speed. To this end, we propose encoding the matched words, obtained from the lexicon, into the representations of characters. Compared with Lattice-LSTM, this method is more concise and easier to implement. It avoids complicated model architecture design and thus has much faster inference speed. It can also be quickly adapted to any appropriate neural architecture without redesign. Given an existing neural character-based NER model, we only have to modify its character representation layer to successfully introduce the word lexicon. In addition, experimental studies on four public Chinese NER datasets show that our method can even achieve better performance than Lattice-LSTM when applying the LSTM-CRF model. Our source code is published at https://github.com/v-mipeng/LexiconAugmentedNER.
<<</Introduction>>>
<<<Generic Character-based Neural Architecture for Chinese NER>>>
In this section, we provide a concise description of the generic character-based neural NER model, which conceptually contains three stacked layers. The first layer is the character representation layer, which maps each character of a sentence into a dense vector. The second layer is the sequence modeling layer. It plays the role of modeling the dependence between characters, obtaining a hidden representation for each character. The final layer is the label inference layer. It takes the hidden representation sequence as input and outputs the predicted label (with probability) for each character. We detail these three layers below.
<<<Character Representation Layer>>>
For a character-based Chinese NER model, the smallest unit of a sentence is a character and the sentence is seen as a character sequence $s=\lbrace c_1, \cdots , c_n\rbrace \in \mathcal {V}_c$, where $\mathcal {V}_c$ is the character vocabulary. Each character $c_i$ is represented using a dense vector (embedding):
where $\mathbf {e}^{c}$ denotes the character embedding lookup table.
<<<Char + bichar.>>>
In addition, BIBREF0 has proved that character bigrams are useful for representing characters, especially for those methods that do not use word information. Therefore, it is common to augment the character representation with bigram information by concatenating bigram embeddings with character embeddings:
where $\mathbf {e}^{b}$ denotes the bigram embedding lookup table, and $\oplus $ denotes the concatenation operation. The sequence of character representations $\mathbf {\mathrm {x}}_i^c$ form the matrix representation $\mathbf {\mathrm {x}}^s=\lbrace \mathbf {\mathrm {x}}_1^c, \cdots , \mathbf {\mathrm {x}}_n^c\rbrace $ of $s$.
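A minimal PyTorch sketch of this character representation layer is given below; vocabulary sizes, embedding dimensions, and the index tensors are illustrative placeholders, not the settings used in the experiments.

```python
import torch
import torch.nn as nn

class CharBicharEmbedding(nn.Module):
    """x_i = e_c(c_i) concatenated with e_b(c_i, c_{i+1}): unigram plus bigram embeddings."""
    def __init__(self, n_chars, n_bigrams, d_char=50, d_bigram=50):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_char)
        self.bichar_emb = nn.Embedding(n_bigrams, d_bigram)

    def forward(self, char_ids, bigram_ids):
        # char_ids, bigram_ids: (batch, seq_len) index tensors
        return torch.cat([self.char_emb(char_ids), self.bichar_emb(bigram_ids)], dim=-1)

emb = CharBicharEmbedding(n_chars=5000, n_bigrams=200000)
chars = torch.randint(0, 5000, (1, 7))       # e.g. indices for the 7 characters "南京市长江大桥"
bigrams = torch.randint(0, 200000, (1, 7))   # bigram of c_i and c_{i+1} (last position padded)
print(emb(chars, bigrams).shape)             # torch.Size([1, 7, 100])
```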
<<</Char + bichar.>>>
<<</Character Representation Layer>>>
<<<Sequence Modeling Layer>>>
The sequence modeling layer models the dependency between characters built on vector representations of the characters. In this work, we explore the applicability of our method to three popular architectures of this layer: the LSTM-based, the CNN-based, and the transformer-based.
<<<LSTM-based>>>
The bidirectional long short-term memory network (BiLSTM) is one of the most commonly used architectures for sequence modeling BIBREF10, BIBREF3, BIBREF11. It contains two LSTM BIBREF12 cells that model the sequence in the left-to-right (forward) and right-to-left (backward) directions with two distinct sets of parameters. Here, we give the definition of the forward LSTM:
where $\sigma $ is the element-wise sigmoid function and $\odot $ represents element-wise product. $\mathbf {\mathrm {\mathrm {W}}} \in {\mathbf {\mathrm {\mathbb {R}}}^{4k_h\times (k_h+k_w)}}$ and $\mathbf {\mathrm {\mathrm {b}}}\in {\mathbf {\mathrm {\mathbb {R}}}^{4k_h}}$ are trainable parameters. The backward LSTM shares the same definition as the forward one but in an inverse sequence order. The concatenated hidden states at the $i^{th}$ step of the forward and backward LSTMs $\mathbf {\mathrm {h}}_i=[\overrightarrow{\mathbf {\mathrm {h}}}_i \oplus \overleftarrow{\mathbf {\mathrm {h}}}_i]$ forms the context-dependent representation of $c_i$.
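For reference, a textbook forward-LSTM formulation consistent with the parameter shapes above (this is the standard form, not necessarily the authors' exact equations) is:
$[\mathbf {\mathrm {i}}_i; \mathbf {\mathrm {f}}_i; \mathbf {\mathrm {o}}_i; \widetilde{\mathbf {\mathrm {c}}}_i] = [\sigma ; \sigma ; \sigma ; \tanh ]\left(\mathbf {\mathrm {W}} [\mathbf {\mathrm {h}}_{i-1} \oplus \mathbf {\mathrm {x}}_i^c] + \mathbf {\mathrm {b}}\right), \qquad \mathbf {\mathrm {c}}_i = \mathbf {\mathrm {f}}_i \odot \mathbf {\mathrm {c}}_{i-1} + \mathbf {\mathrm {i}}_i \odot \widetilde{\mathbf {\mathrm {c}}}_i, \qquad \overrightarrow{\mathbf {\mathrm {h}}}_i = \mathbf {\mathrm {o}}_i \odot \tanh (\mathbf {\mathrm {c}}_i).$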
<<</LSTM-based>>>
<<<CNN-based>>>
Another popular architecture for sequence modeling is the convolution network BIBREF13, which has been proved BIBREF14 to be effective for Chinese NER. In this work, we apply a convolutional layer to model trigrams of the character sequence and gradually model its multigrams by stacking multiple convolutional layers. Specifically, let $\mathbf {\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\mathbf {\mathrm {h}}_i^0=\mathbf {\mathrm {x}}^c_i$, and $\mathbf {\mathrm {F}}^l \in \mathbb {R}^{k_l \times k_c \times 3}$ denote the corresponding filter used in this layer. To obtain the hidden representation $\mathbf {\mathrm {h}}^{l+1}_i$ of $c_i$ in the $(l+1)^{th}$ layer, it takes the convolution of $\mathbf {\mathrm {F}}^l$ over the 3-gram representation:
where $\mathbf {\mathrm {h}}^l_{<i-1, i+1>} = [\mathbf {\mathrm {h}}^l_{i-1}; \mathbf {\mathrm {h}}^l_{i}; \mathbf {\mathrm {h}}^l_{i+1}]$ and $\langle A,B \rangle _i=\mbox{Tr}(AB[i, :, :]^T)$. This operation is applied $L$ times, obtaining the final context-dependent representation, $\mathbf {\mathrm {h}}_i = \mathbf {\mathrm {h}}_i^L$, of $c_i$.
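A minimal PyTorch sketch of this stacked trigram convolution (an illustration of the idea rather than the authors' implementation; the module name, the ReLU non-linearity, and hyper-parameter names such as num_layers are our assumptions):

import torch
import torch.nn as nn

class StackedCharCNN(nn.Module):
    # Stacks num_layers kernel-size-3 convolutions, so that layer l effectively
    # covers (2l+1)-grams of the character sequence.
    def __init__(self, input_dim, hidden_dim, num_layers):
        super().__init__()
        dims = [input_dim] + [hidden_dim] * num_layers
        self.convs = nn.ModuleList(
            nn.Conv1d(dims[l], dims[l + 1], kernel_size=3, padding=1)
            for l in range(num_layers)
        )

    def forward(self, x):              # x: (batch, seq_len, input_dim)
        h = x.transpose(1, 2)          # Conv1d expects (batch, channels, seq_len)
        for conv in self.convs:
            h = torch.relu(conv(h))    # non-linearity between layers is an assumption
        return h.transpose(1, 2)       # (batch, seq_len, hidden_dim)

out = StackedCharCNN(input_dim=50, hidden_dim=128, num_layers=4)(torch.randn(2, 20, 50))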
<<</CNN-based>>>
<<<Transformer-based>>>
The Transformer BIBREF15 was originally proposed for sequence transduction, where it has shown several advantages over recurrent and convolutional neural networks. It can also be applied to the sequence labeling task using only its encoder part.
Similarly, let $\mathbf {\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\mathbf {\mathrm {h}}_i^0=\mathbf {\mathrm {x}}^c_i$, and $f^l$ denote a feedforward module used in this layer. To obtain the hidden representation matrix $\mathbf {\mathrm {h}}^{l+1}$ of $s$ in the $(l+1)^{th}$ layer, it takes the self-attention of $\mathbf {\mathrm {h}}^l$:
where $d^l$ is the dimension of $\mathbf {\mathrm {h}}^l_i$. This process is applied $L$ times, obtaining $\mathbf {\mathrm {h}}^L$. After that, the position information of each character $c_i$ is introduced into $\mathbf {\mathrm {h}}^L_i$ to obtain its final context-dependent representation $\mathbf {\mathrm {h}}_i$:
where $PE_i=\sin (i/1000^{2j/d^L}+j\%2\cdot \pi /2)$. We refer the reader to the excellent guide “The Annotated Transformer” for more implementation details of this architecture.
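For reference, the single-head scaled dot-product self-attention that this corresponds to can be written as follows (the projection matrices $W_Q$, $W_K$, $W_V$ are standard but are our notation, not the paper's):
$\mathbf {\mathrm {h}}^{l+1} = f^l\left(\operatorname{softmax}\left(\frac{(\mathbf {\mathrm {h}}^{l} W_Q)(\mathbf {\mathrm {h}}^{l} W_K)^{\top }}{\sqrt{d^l}}\right)\, \mathbf {\mathrm {h}}^{l} W_V\right).$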
<<</Transformer-based>>>
<<</Sequence Modeling Layer>>>
<<<Label Inference Layer>>>
On top of the sequence modeling layer, a sequential conditional random field (CRF) BIBREF16 layer is applied to perform label inference for the character sequence as a whole:
where $\mathcal {Y}_s$ denotes all possible label sequences of $s$, $\phi _{t}({y}^\prime , {y}|\mathbf {\mathrm {s}})=\exp (\mathbf {w}^T_{{y}^\prime , {y}} \mathbf {\mathrm {h}}_t + b_{{y}^\prime , {y}})$, where $\mathbf {w}_{{y}^\prime , {y}}$ and $ b_{{y}^\prime , {y}}$ are trainable parameters corresponding to the label pair $({y}^\prime , {y})$, and $\mathbf {\theta }$ denotes model parameters. For label inference, it searches for the label sequence $\mathbf {\mathrm {y}}^{*}$ with the highest conditional probability given the input sequence ${s}$:
which can be efficiently solved using the Viterbi algorithm BIBREF17.
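A minimal sketch of this Viterbi decoding step, assuming the per-position unary scores $\mathbf {w}^T_{y} \mathbf {\mathrm {h}}_t$ and the transition scores $b_{y^\prime , y}$ have already been computed as numpy arrays (the function name and array layout are ours):

import numpy as np

def viterbi_decode(emissions, transitions):
    # emissions: (seq_len, num_labels) unary scores; transitions: (num_labels, num_labels),
    # where transitions[i, j] is the score of moving from label i to label j.
    seq_len, num_labels = emissions.shape
    score = emissions[0].copy()                      # best score ending in each label at step 0
    backpointers = np.zeros((seq_len, num_labels), dtype=int)
    for t in range(1, seq_len):
        # candidate[i, j] = best score of a path with label i at t-1 and label j at t
        candidate = score[:, None] + transitions + emissions[t][None, :]
        backpointers[t] = candidate.argmax(axis=0)
        score = candidate.max(axis=0)
    best_path = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):
        best_path.append(int(backpointers[t, best_path[-1]]))
    return best_path[::-1], float(score.max())

labels, total_score = viterbi_decode(np.random.randn(6, 4), np.random.randn(4, 4))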
<<</Label Inference Layer>>>
<<</Generic Character-based Neural Architecture for Chinese NER>>>
<<<Lattice-LSTM for Chinese NER>>>
Lattice-LSTM is designed to incorporate the word lexicon into the character-based neural sequence labeling model. To achieve this purpose, it first performs lexicon matching on the input sentence. It adds a directed edge from $c_i$ to $c_j$ if the sub-sequence $\lbrace c_i, \cdots , c_j\rbrace $ of the sentence matches a word of the lexicon for $i < j$. It preserves all lexicon matching results on a character by allowing the character to connect with multiple characters. Concretely, for a sentence $\lbrace c_1, c_2, c_3, c_4, c_5\rbrace $, if both its sub-sequences $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ match a word of the lexicon, it adds a directed edge from $c_1$ to $c_4$ and a directed edge from $c_2$ to $c_4$. This practice turns the input form of the sentence from a chained sequence into a graph.
To model the graph-based input, Lattice-LSTM accordingly modifies the LSTM-based sequence modeling layer. Specifically, let $s_{<*, j>}$ denote the list of sub-sequences of a sentence $s$ that match the lexicon and end with $c_j$, $\mathbf {\mathrm {h}}_{<*, j>}$ denote the corresponding hidden state list $\lbrace \mathbf {\mathrm {h}}_i, \forall s_{<i, j>} \in s_{<*, j>}\rbrace $, and $\mathbf {\mathrm {c}}_{<*, j>}$ denote the corresponding memory cell list $\lbrace \mathbf {\mathrm {c}}_i, \forall s_{<i, j>} \in s_{<*, j>}\rbrace $. In Lattice-LSTM, the hidden state $\mathbf {\mathrm {h}}_j$ and memory cell $\mathbf {\mathrm {c}}_j$ of $c_j$ are now updated by:
where $f$ is a simplified representation of the function used by Lattice-LSTM to perform memory update. Note that, in the updating process, the inputs now contain the current-step character representation $\mathbf {\mathrm {x}}_j^c$, the last-step hidden state $\mathbf {\mathrm {h}}_{j-1}$ and memory cell $\mathbf {\mathrm {c}}_{j-1}$, and the lexicon-matched sub-sequences $s_{<*, j>}$ together with their corresponding hidden state and memory cell lists, $\mathbf {\mathrm {h}}_{<*, j>}$ and $\mathbf {\mathrm {c}}_{<*, j>}$. We refer the reader to the Lattice-LSTM paper BIBREF0 for more details on the implementation of $f$.
A problem of Lattice-LSTM is that its speed of sequence modeling is much slower than the normal LSTM architecture since it has to additionally model $s_{<*, j>}$, $\mathbf {\mathrm {h}}_{<*, j>}$, and $\mathbf {\mathrm {c}}_{<*, j>}$ for memory update. In addition, considering the implementation of $f$, it is hard for Lattice-LSTM to process multiple sentences in parallel (in the published implementation of Lattice-LSTM, the batch size was set to 1). This raises the necessity to design a simpler way to achieve the function of Lattice-LSTM for incorporating the word lexicon into the character-based NER model.
<<</Lattice-LSTM for Chinese NER>>>
<<<Proposed Method>>>
In this section, we introduce our method, which aims to keep the merits of Lattice-LSTM and, at the same time, make the computation efficient. We start the description of our method with our analysis of Lattice-LSTM.
In our view, the advantage of Lattice-LSTM comes from two points. The first point is that it preserves all possible matching words for each character. This avoids the error propagation introduced by heuristically choosing a single matching result for a character and feeding it to the NER system. The second point is that it can introduce pre-trained word embeddings to the system, which brings great help to the final performance. The disadvantage of Lattice-LSTM is that it turns the input form of a sentence from a chained sequence into a graph, which greatly increases the computational cost of sentence modeling. Therefore, the design of our method should keep the chained input form of the sentence and, at the same time, retain the above two advantages of Lattice-LSTM.
With this in mind, our method design was first motivated by the Softword technique, which was originally used for incorporating word segmentation information into downstream tasks BIBREF18, BIBREF19. Specifically, the Softword technique augments the representation of a character with the embedding of its corresponding segmentation label:
Here, $seg(c_j) \in \mathcal {Y}_{seg}$ denotes the segmentation label of the character $c_j$ predicted by the word segmentor, $\mathbf {e}^{seg}$ denotes the segmentation label embedding lookup table, and commonly $\mathcal {Y}_{seg}=\lbrace \text{B}, \text{M}, \text{E}, \text{S}\rbrace $ with B, M, E indicating that the character is the beginning, middle, and end of a word, respectively, and S indicating that the character itself forms a single-character word.
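Concretely, one way to write this Softword augmentation, consistent with the lookup tables just defined, is:
$\mathbf {\mathrm {x}}_j^c \leftarrow [\mathbf {\mathrm {x}}_j^c \oplus \mathbf {e}^{seg}(seg(c_j))].$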
The first idea we came up with based on the Softword technique is to construct a word segmenter using the lexicon and allow a character to have multiple segmentation labels. Take the sentence $s=\lbrace c_1, c_2, c_3, c_4, c_5\rbrace $ as an example. If both its sub-sequences $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_3, c_4\rbrace $ match a word of the lexicon, then the segmentation label sequence of $s$ using the lexicon is $segs(s)=\lbrace \lbrace \text{B}\rbrace , \lbrace \text{M}\rbrace , \lbrace \text{B}, \text{M}\rbrace , \lbrace \text{E}\rbrace , \lbrace \text{O}\rbrace \rbrace $. Here, $segs(s)_1=\lbrace \text{B}\rbrace $ indicates that there is at least one sub-sequence of $s$ matching a word of the lexicon and beginning with $c_1$, $segs(s)_3=\lbrace \text{B}, \text{M}\rbrace $ means that there is at least one sub-sequence of $s$ matching the lexicon and beginning with $c_3$ and there is also at least one lexicon-matched sub-sequence in the middle of which $c_3$ occurs, and $segs(s)_5=\lbrace \text{O}\rbrace $ means that there is no sub-sequence of $s$ that matches the lexicon and contains $c_5$. The character representation is then obtained by:
where $\mathbf {e}^{seg}(segs(s)_j)$ is a 5-dimensional binary vector with each dimension corresponding to an item of $\lbrace \text{B, M, E, S, O}\rbrace $. We call this method ExSoftword in the following.
However, through the analysis of ExSoftword, we find that it cannot fully inherit the two merits of Lattice-LSTM. Firstly, it cannot introduce pre-trained word embeddings. Secondly, though it tries to keep all the lexicon matching results by allowing a character to have multiple segmentation labels, it still loses a lot of information. In many cases, we cannot restore the matching results from the segmentation label sequence. Consider the case in which, in the sentence $s=\lbrace c_1, c_2, c_3, c_4\rbrace $, $\lbrace c_1, c_2, c_3\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ match the lexicon. In this case, $segs(s) = \lbrace \lbrace \text{B}\rbrace , \lbrace \text{B}, \text{M}\rbrace , \lbrace \text{M}, \text{E}\rbrace , \lbrace \text{E}\rbrace \rbrace $. However, based on $segs(s)$ and $s$, we cannot tell that it is $\lbrace c_1, c_2, c_3\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ that match the lexicon, since we would obtain the same segmentation label sequence if $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_2,c_3\rbrace $ matched the lexicon.
To this end, we propose to preserve not only the possible segmentation labels of a character but also their corresponding matched words. Specifically, in this improved method, each character $c$ of a sentence $s$ corresponds to four word sets marked by the four segmentation labels “BMES". The word set $\rm {B}(c)$ consists of all lexicon-matched words on $s$ that begin with $c$. Similarly, $\rm {M}(c)$ consists of all lexicon-matched words in the middle of which $c$ occurs, $\rm {E}(c)$ consists of all lexicon-matched words that end with $c$, and $\rm {S}(c)$ is the single-character word comprised of $c$. If a word set is empty, a special word “NONE" is added to it to indicate this situation. Consider the sentence $s=\lbrace c_1, \cdots , c_5\rbrace $ and suppose that $\lbrace c_1, c_2\rbrace $, $\lbrace c_1, c_2, c_3\rbrace $, $\lbrace c_2, c_3, c_4\rbrace $, and $\lbrace c_2, c_3, c_4, c_5\rbrace $ match the lexicon. Then, for $c_2$, $\rm {B}(c_2)=\lbrace \lbrace c_2, c_3, c_4\rbrace , \lbrace c_2, c_3, c_4, c_5\rbrace \rbrace $, $\rm {M}(c_2)=\lbrace \lbrace c_1, c_2, c_3\rbrace \rbrace $, $\rm {E}(c_2)=\lbrace \lbrace c_1, c_2\rbrace \rbrace $, and $\rm {S}(c_2)=\lbrace NONE\rbrace $. In this way, we can now introduce the pre-trained word embeddings and, moreover, we can exactly restore the matching results from the word sets of each character.
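A minimal sketch of how these four word sets can be constructed by scanning the sentence against the lexicon (function and variable names are ours, and max_word_len is an assumption used to bound the scan):

def build_bmes_sets(chars, lexicon, max_word_len=8):
    # chars: list of characters; lexicon: set of words (strings).
    # Returns, for each character, a dict with the B/M/E/S word sets described above.
    sets = [{"B": set(), "M": set(), "E": set(), "S": set()} for _ in chars]
    n = len(chars)
    for i in range(n):
        for j in range(i + 1, min(n, i + max_word_len) + 1):
            word = "".join(chars[i:j])
            if word not in lexicon:
                continue
            if j - i == 1:
                sets[i]["S"].add(word)           # single-character word
            else:
                sets[i]["B"].add(word)           # word begins at position i
                sets[j - 1]["E"].add(word)       # word ends at position j-1
                for k in range(i + 1, j - 1):
                    sets[k]["M"].add(word)       # character in the middle of the word
    for s in sets:
        for key in s:                            # pad empty sets with the special word "NONE"
            if not s[key]:
                s[key].add("NONE")
    return sets

word_sets = build_bmes_sets(list("南京市长江大桥"), {"南京", "南京市", "长江", "长江大桥", "大桥"})

The example call reuses the sentence from the frequency discussion below with a toy lexicon.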
The next step of the improved method is to condense the four word sets of each character into a fixed-dimensional vector. In order to retain as much information as possible, we choose to concatenate the representations of the four word sets and add the result to the character representation:
Here, $\mathbf {v}^s$ denotes the function that maps a single word set to a dense vector.
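Written out explicitly, one way to read this concatenation (using the four word sets ${\rm B}(c_j)$, ${\rm M}(c_j)$, ${\rm E}(c_j)$, ${\rm S}(c_j)$ defined above) is:
$\mathbf {\mathrm {x}}_j^c \leftarrow [\mathbf {\mathrm {x}}_j^c \oplus \mathbf {v}^s({\rm B}(c_j)) \oplus \mathbf {v}^s({\rm M}(c_j)) \oplus \mathbf {v}^s({\rm E}(c_j)) \oplus \mathbf {v}^s({\rm S}(c_j))].$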
This also means that we should map each word set into a fixed-dimensional vector. To achieve this purpose, we first tried the mean-pooling algorithm to get the vector representation of a word set $\mathcal {S}$:
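A natural formulation of this mean pooling, with $\mathbf {e}^w$ the word embedding lookup table introduced just below, is:
$\mathbf {v}^s(\mathcal {S}) = \frac{1}{|\mathcal {S}|} \sum _{w \in \mathcal {S}} \mathbf {e}^w(w).$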
Here, $\mathbf {e}^w$ denotes the word embedding lookup table. However, the empirical studies, as depicted in Table TABREF31, show that this algorithm does not perform well. Through comparison with Lattice-LSTM, we find that Lattice-LSTM applies a dynamic attention algorithm to weight each matched word related to a single character. Motivated by this practice, we propose to weight the representation of each word in the word set to get the pooled representation of the word set. However, considering computational efficiency, we do not want to apply a dynamic weighting algorithm, like attention, to get the weight of each word. With this in mind, we propose to use the frequency of the word as an indication of its weight. The basic idea behind this algorithm is that the more times a character sequence occurs in the data, the more likely it is a word. Note that the frequency of a word is a static value and can be obtained offline. This can greatly accelerate the calculation of the weight of each word (e.g., using a lookup table).
Specifically, let $w_c$ denote the character sequence constituting $w$ and $z(w)$ denote the frequency with which $w_c$ occurs in the statistic data set (in this work, we combine the training and testing data of a task to construct the statistic data set; of course, if we have unlabeled data for the task, we can take the unlabeled data as the statistic data set). Note that we do not add the frequency of $w$ if $w_c$ is covered by the match of another word of the lexicon in the sentence. For example, suppose that the lexicon contains both “南京 (Nanjing)" and “南京市 (Nanjing City)". Then, when counting word frequency on the sequence “南京市长江大桥", we will not add the frequency of “南京" since it is covered by “南京市" in the sequence. This avoids the situation where the frequency of “南京" is always higher than that of “南京市". Finally, we get the weighted representation of the word set $\mathcal {S}$ by:
where
Here, we perform weight normalization over all words of the four word sets to allow them to compete with each other across sets.
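One concrete form of this weighted pooling that matches the description (normalizing over the union of the four word sets of the character; the scaling factor of 4, one per set, is our reading rather than something stated explicitly here) is:
$\mathbf {v}^s(\mathcal {S}) = \frac{4}{Z}\sum _{w \in \mathcal {S}} z(w)\, \mathbf {e}^w(w), \qquad Z = \sum _{w \in {\rm B}(c) \cup {\rm M}(c) \cup {\rm E}(c) \cup {\rm S}(c)} z(w).$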
Further, we have also tried introducing smoothing to the weight of each word to increase the weights of infrequent words. Specifically, we add a constant $c$ to the frequency of each word and re-define $\mathbf {v}^s$ by:
where
We set $c$ such that 10% of the training words occur less than $c$ times within the statistic data set. In summary, our method mainly contains the following four steps. Firstly, we scan each input sentence with the word lexicon, obtaining the four 'BMES' word sets for each character of the sentence. Secondly, we look up the frequency of each word counted on the statistic data set. Thirdly, we obtain the vector representation of the four word sets of each character according to Eq. (DISPLAY_FORM22), and add it to the character representation according to Eq. (DISPLAY_FORM20). Finally, based on the augmented character representations, we perform sequence labeling using any appropriate neural sequence labeling model, such as an LSTM-based sequence modeling layer with a CRF label inference layer.
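A compact sketch of the first three steps (lexicon scan, frequency counting with the covered-word rule, and frequency-weighted pooling into a fixed-dimensional vector); all names are ours, the embedding table is a plain dict from word to vector, and the smoothing argument corresponds to the constant $c$ above:

import numpy as np

def count_frequencies(sentences, lexicon, max_word_len=8):
    # Count how often each lexicon word occurs in the statistic data set,
    # skipping matches covered by a longer lexicon match (the 南京/南京市 rule).
    freq = {w: 0 for w in lexicon}
    for chars in sentences:
        n = len(chars)
        spans = []
        for i in range(n):
            for j in range(i + 1, min(n, i + max_word_len) + 1):
                w = "".join(chars[i:j])
                if w in lexicon:
                    spans.append((i, j, w))
        for i, j, w in spans:
            covered = any(a <= i and j <= b and (a, b) != (i, j) for a, b, _ in spans)
            if not covered:
                freq[w] += 1
    return freq

def pool_word_sets(word_sets, freq, emb, dim, smoothing=0):
    # word_sets: the B/M/E/S dict of one character (e.g., from build_bmes_sets above).
    # Returns the concatenation of the four frequency-weighted set representations.
    z = {w: freq.get(w, 0) + smoothing for s in word_sets.values() for w in s}
    total = sum(z.values()) or 1.0
    parts = []
    for key in ("B", "M", "E", "S"):
        vec = np.zeros(dim)
        for w in word_sets[key]:
            vec += 4.0 * z[w] / total * emb.get(w, np.zeros(dim))
        parts.append(vec)
    return np.concatenate(parts)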
<<</Proposed Method>>>
<<<Experiments>>>
<<<Experiment Design>>>
Firstly, we performed a development study on our method with the LSTM-based sequence modeling layer, in order to compare the implementations of $\mathbf {v}^s$ and to determine whether or not to use character bigrams in our method. Decisions made in this step are applied to the following experiments. Secondly, we verified the computational efficiency of our method compared with Lattice-LSTM and LR-CNN BIBREF20, which is a follow-up of Lattice-LSTM designed for faster inference speed. Thirdly, we verified the effectiveness of our method by comparing its performance with that of Lattice-LSTM and other comparable models on four benchmark Chinese NER datasets. Finally, we verified the applicability of our method to different sequence labeling models.
<<</Experiment Design>>>
<<<Experiment Setup>>>
Most experimental settings in this work follow the protocols of Lattice-LSTM BIBREF0, including tested datasets, compared baselines, evaluation metrics (P, R, F1), and so on. To make this work self-contained, we concisely describe the primary settings of this work.
<<<Datasets>>>
The methods were evaluated on four Chinese NER datasets, including OntoNotes BIBREF21, MSRA BIBREF22, Weibo NER BIBREF23, BIBREF24, and Resume NER BIBREF0. OntoNotes and MSRA are from the newswire domain, where gold-standard segmentation is available for training data. For OntoNotes, gold segmentation is also available for development and testing data. Weibo NER and Resume NER are from social media and resume, respectively. There is no gold standard segmentation in these two datasets. Table TABREF26 shows statistic information of these datasets. As for the lexicon, we used the same one as Lattice-LSTM, which contains 5.7k single-character words, 291.5k two-character words, 278.1k three-character words, and 129.1k other words.
<<</Datasets>>>
<<<Implementation Detail>>>
When applying the LSTM-based sequence modeling layer, we followed most implementation protocols of Lattice-LSTM, including character and word embedding sizes, dropout, embedding initialization, and LSTM layer number. The hidden size was set to 100 for Weibo and 256 for the other three datasets. The learning rate was set to 0.005 for Weibo and Resume and 0.0015 for OntoNotes and MSRA with Adamax BIBREF25.
When applying the CNN- and transformer-based sequence modeling layers, most hyper-parameters were the same as those used in the LSTM-based model. In addition, the layer number $L$ for the CNN-based model was set to 4, and that for the transformer-based model was set to 2 with $h=4$ parallel attention layers. The kernel number $k_f$ of the CNN-based model was set to 512 for MSRA and 128 for the other datasets in all layers.
<<</Implementation Detail>>>
<<</Experiment Setup>>>
<<<Development Experiments>>>
In this experiment, we compared the implementations of $\mathbf {v}^s$ with the LSTM-based sequence modeling layer. In addition, we study whether or not character bigrams can bring improvement to our method.
Table TABREF31 shows the performance of three implementations of $\mathbf {v}^s$ without using character bigrams. From the table, we can see that the weighted pooling algorithm performs generally better than the other two implementations. Of course, we may obtain better results with the smoothed weighted pooling algorithm by reducing the value of $c$ (when $c=0$, it is equivalent to the weighted pooling algorithm). We did not do so for two reasons. The first one is to guarantee the generality of our system for unexplored tasks. The second one is that the performance of the weighted pooling algorithm is good enough compared with other state-of-the-art baselines. Therefore, in the following experiments, we applied the weighted pooling algorithm by default to implement $\mathbf {v}^s$.
Figure FIGREF32 shows the F1-score of our method against the number of training iterations when using character bigram or not. From the figure, we can see that additionally introducing character bigrams cannot bring considerable improvement to our method. A possible explanation of this phenomenon is that the introduced word information by our proposed method has covered the bichar information. Therefore, in the following experiments, we did not use bichar in our method.
<<</Development Experiments>>>
<<<Computational Efficiency Study>>>
Table TABREF34 shows the inference speed of our method when implementing the sequence modeling layer with the LSTM-based, CNN-based, and Transformer-based architecture, respectively. The speed was evaluated by average sentences per second using a GPU (NVIDIA TITAN X). For a fair comparison with Lattice-LSTM and LR-CNN, we set the batch size of our method to 1 at inference time. From the table, we can see that our method has a much faster inference speed than Lattice-LSTM when using the LSTM-based sequence modeling layer, and it was also much faster than LR-CNN, which used a CNN architecture to implement the sequence modeling layer. As expected, our method with the CNN-based sequence modeling layer showed some advantage in inference speed over those with the LSTM-based and Transformer-based sequence modeling layers.
<<</Computational Efficiency Study>>>
<<<Effectiveness Study>>>
Tables TABREF37$-$TABREF43 show the performance of our method with the LSTM-based sequence modeling layer compared with Lattice-LSTM and other comparative baselines.
<<<OntoNotes.>>>
Table TABREF37 shows results on OntoNotes, which has gold segmentation for both training and testing data. The methods in the “Gold seg" and “Auto seg" groups are word-based, built on the gold word segmentation results and the automatic segmentation results, respectively. The automatic segmentation results were generated by a segmenter trained on the training data of OntoNotes. Methods in the “No seg" group are character-based. From the table, we can obtain several informative observations. First, by replacing the gold segmentation with the automatically generated segmentation, the F1-score of the Word-based (LSTM) + char + bichar model decreased from 75.77% to 71.70%. This shows the problem of treating the predicted word segmentation result as the true one for word-based Chinese NER. Second, the Char-based (LSTM)+bichar+ExSoftword model improved the F1-score of the Char-based (LSTM)+bichar+softword baseline from 71.89% to 72.40%. This indicates the feasibility of ExSoftword as a naive extension of softword. However, it still greatly underperformed Lattice-LSTM, showing its deficiency in utilizing word information. Finally, our proposed method, which is a further extension of ExSoftword, obtained a statistically significant improvement over Lattice-LSTM and even performed similarly to the word-based methods with gold segmentation, verifying its effectiveness on this data set.
<<</OntoNotes.>>>
<<<MSRA.>>>
Table TABREF40 shows results on MSRA. The word-based methods were built on the automatic segmentation results generated by the segmenter trained on training data of MSRA. Compared methods included the best statistical models on this data set, which leveraged rich handcrafted features BIBREF28, BIBREF29, BIBREF30, character embedding features BIBREF31, and radical features BIBREF32. From the table, we observe that our method obtained a statistically significant improvement over Lattice-LSTM and other comparative baselines on the recall and F1-score, verifying the effectiveness of our method on this data set.
<<</MSRA.>>>
<<<Weibo/Resume.>>>
Table TABREF42 shows results on Weibo NER, where NE, NM, and Overall denote F1-scores for named entities, nominal entities (excluding named entities) and both, respectively. The existing state-of-the-art system BIBREF19 explored rich embedding features, cross-domain data, and semi-supervised data. From the table, we can see that our proposed method achieved considerable improvement over the compared baselines on this data set. Table TABREF43 shows results on Resume. Consistent with observations on the other three tested data sets, our proposed method significantly outperformed Lattice-LSTM and the other comparable methods on this data set.
<<</Weibo/Resume.>>>
<<</Effectiveness Study>>>
<<<Transferability Study>>>
Table TABREF46 shows performance of our method with different sequence modeling architectures. From the table, we can first see that the LSTM-based architecture performed better than the CNN- and transformer- based architectures. In addition, our methods with different sequence modeling layers consistently outperformed their corresponding ExSoftword baselines. This shows that our method is applicable to different neural sequence modeling architectures for exploiting lexicon information.
<<</Transferability Study>>>
<<</Experiments>>>
<<<Conclusion>>>
In this work, we address the computational efficiency of utilizing a word lexicon in Chinese NER. To achieve a high-performing NER system with fast inference speed, we proposed adding lexicon information to the character representations while keeping the input form of a sentence as a chained sequence. Experimental studies on four benchmark Chinese NER datasets show that our method obtains faster inference speed than the comparative methods and, at the same time, achieves high performance. They also show that our method can be applied to different neural sequence labeling models for Chinese NER.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Experiments"
],
"type": "disordered_section"
}
|
1908.05969
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Simplify the Usage of Lexicon in Chinese NER
<<<Abstract>>>
Recently, many works have tried to utilizing word lexicon to augment the performance of Chinese named entity recognition (NER). As a representative work in this line, Lattice-LSTM \cite{zhang2018chinese} has achieved new state-of-the-art performance on several benchmark Chinese NER datasets. However, Lattice-LSTM suffers from a complicated model architecture, resulting in low computational efficiency. This will heavily limit its application in many industrial areas, which require real-time NER response. In this work, we ask the question: if we can simplify the usage of lexicon and, at the same time, achieve comparative performance with Lattice-LSTM for Chinese NER? ::: Started with this question and motivated by the idea of Lattice-LSTM, we propose a concise but effective method to incorporate the lexicon information into the vector representations of characters. This way, our method can avoid introducing a complicated sequence modeling architecture to model the lexicon information. Instead, it only needs to subtly adjust the character representation layer of the neural sequence model. Experimental study on four benchmark Chinese NER datasets shows that our method can achieve much faster inference speed, comparative or better performance over Lattice-LSTM and its follwees. It also shows that our method can be easily transferred across difference neural architectures.
<<</Abstract>>>
<<<Introduction>>>
Named Entity Recognition (NER) is concerned with identifying named entities, such as person, location, product, and organization names, in unstructured text. In languages where words are naturally separated (e.g., English), NER was conventionally formulated as a sequence labeling problem, and the state-of-the-art results have been achieved by those neural-network-based models BIBREF1, BIBREF2, BIBREF3, BIBREF4.
Compared with NER in English, Chinese NER is more difficult since sentences in Chinese are not previously segmented. Thus, one common practice in Chinese NER is first performing word segmentation using an existing CWS system and then applying a word-level sequence labeling model to the segmented sentence BIBREF5, BIBREF6. However, it is inevitable that the CWS system will wrongly segment the query sequence. This will, in turn, result in entity boundary detection errors and even entity category prediction errors in the following NER. Take the character sequence “南京市 (Nanjing) / 长江大桥 (Yangtze River Bridge)" as an example, where “/" indicates the gold segmentation result. If the sequence is segmented into “南京 (Nanjing) / 市长 (mayor) / 江大桥 (Daqiao Jiang)", the word-based NER system is definitely not able to correctly recognize “南京市 (Nanjing)" and “长江大桥 (Yangtze River Bridge)" as two entities of the location type. Instead, it is possible to incorrectly treat “南京 (Nanjing)" as a location entity and predict “江大桥 (Daqiao Jiang)" to be a person's name. Therefore, some works resort to performing Chinese NER directly on the character level, and it has been shown that this practice can achieve better performance BIBREF7, BIBREF8, BIBREF9, BIBREF0.
A drawback of the purely character-based NER method is that word information, which has been proved to be useful, is not fully exploited. With this consideration, BIBREF0 proposed to incorporating word lexicon into the character-based NER model. In addition, instead of heuristically choosing a word for the character if it matches multiple words of the lexicon, they proposed to preserving all matched words of the character, leaving the following NER model to determine which matched word to apply. To achieve this, they introduced an elaborate modification to the LSTM-based sequence modeling layer of the LSTM-CRF model BIBREF1 to jointly model the character sequence and all of its matched words. Experimental studies on four public Chinese NER datasets show that Lattice-LSTM can achieve comparative or better performance on Chinese NER over existing methods.
Although successful, there exists a big problem in Lattice-LSTM that limits its application in many industrial areas, where real-time NER responses are needed. That is, its model architecture is quite complicated. This slows down its inference speed and makes it difficult to perform training and inference in parallel. In addition, it is far from easy to transfer the structure of Lattice-LSTM to other neural-network architectures (e.g., convolutional neural networks and transformers), which may be more suitable for some specific datasets.
In this work, we aim to find a easier way to achieve the idea of Lattice-LSTM, i.e., incorporating all matched words of the sentence to the character-based NER model. The first principle of our method design is to achieve a fast inference speed. To this end, we propose to encoding the matched words, obtained from the lexicon, into the representations of characters. Compared with Lattice-LSTM, this method is more concise and easier to implement. It can avoid complicated model architecture design thus has much faster inference speed. It can also be quickly adapted to any appropriate neural architectures without redesign. Given an existing neural character-based NER model, we only have to modify its character representation layer to successfully introduce the word lexicon. In addition, experimental studies on four public Chinese NER datasets show that our method can even achieve better performance than Lattice-LSTM when applying the LSTM-CRF model. Our source code is published at https://github.com/v-mipeng/LexiconAugmentedNER.
<<</Introduction>>>
<<<Generic Character-based Neural Architecture for Chinese NER>>>
In this section, we provide a concise description of the generic character-based neural NER model, which conceptually contains three stacked layers. The first layer is the character representation layer, which maps each character of a sentence into a dense vector. The second layer is the sequence modeling layer. It plays the role of modeling the dependence between characters, obtaining a hidden representation for each character. The final layer is the label inference layer. It takes the hidden representation sequence as input and outputs the predicted label (with probability) for each character. We detail these three layers below.
<<<Character Representation Layer>>>
For a character-based Chinese NER model, the smallest unit of a sentence is a character and the sentence is seen as a character sequence $s=\lbrace c_1, \cdots , c_n\rbrace \in \mathcal {V}_c$, where $\mathcal {V}_c$ is the character vocabulary. Each character $c_i$ is represented using a dense vector (embedding):
where $\mathbf {e}^{c}$ denotes the character embedding lookup table.
<<<Char + bichar.>>>
In addition, BIBREF0 has proved that character bigrams are useful for representing characters, especially for those methods not use word information. Therefore, it is common to augment the character representation with bigram information by concatenating bigram embeddings with character embeddings:
where $\mathbf {e}^{b}$ denotes the bigram embedding lookup table, and $\oplus $ denotes the concatenation operation. The sequence of character representations $\mathbf {\mathrm {x}}_i^c$ form the matrix representation $\mathbf {\mathrm {x}}^s=\lbrace \mathbf {\mathrm {x}}_1^c, \cdots , \mathbf {\mathrm {x}}_n^c\rbrace $ of $s$.
<<</Char + bichar.>>>
<<</Character Representation Layer>>>
<<<Sequence Modeling Layer>>>
The sequence modeling layer models the dependency between characters built on vector representations of the characters. In this work, we explore the applicability of our method to three popular architectures of this layer: the LSTM-based, the CNN-based, and the transformer-based.
<<<LSTM-based>>>
The bidirectional long-short term memory network (BiLSTM) is one of the most commonly used architectures for sequence modeling BIBREF10, BIBREF3, BIBREF11. It contains two LSTM BIBREF12 cells that model the sequence in the left-to-right (forward) and right-to-left (backward) directions with two distinct sets of parameters. Here, we precisely show the definition of the forward LSTM:
where $\sigma $ is the element-wise sigmoid function and $\odot $ represents element-wise product. $\mathbf {\mathrm {\mathrm {W}}} \in {\mathbf {\mathrm {\mathbb {R}}}^{4k_h\times (k_h+k_w)}}$ and $\mathbf {\mathrm {\mathrm {b}}}\in {\mathbf {\mathrm {\mathbb {R}}}^{4k_h}}$ are trainable parameters. The backward LSTM shares the same definition as the forward one but in an inverse sequence order. The concatenated hidden states at the $i^{th}$ step of the forward and backward LSTMs $\mathbf {\mathrm {h}}_i=[\overrightarrow{\mathbf {\mathrm {h}}}_i \oplus \overleftarrow{\mathbf {\mathrm {h}}}_i]$ forms the context-dependent representation of $c_i$.
<<</LSTM-based>>>
<<<CNN-based>>>
Another popular architecture for sequence modeling is the convolution network BIBREF13, which has been proved BIBREF14 to be effective for Chinese NER. In this work, we apply a convolutional layer to model trigrams of the character sequence and gradually model its multigrams by stacking multiple convolutional layers. Specifically, let $\mathbf {\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\mathbf {\mathrm {h}}_i^0=\mathbf {\mathrm {x}}^c_i$, and $\mathbf {\mathrm {F}}^l \in \mathbb {R}^{k_l \times k_c \times 3}$ denote the corresponding filter used in this layer. To obtain the hidden representation $\mathbf {\mathrm {h}}^{l+1}_i$ of $c_i$ in the $(l+1)^{th}$ layer, it takes the convolution of $\mathbf {\mathrm {F}}^l$ over the 3-gram representation:
where $\mathbf {\mathrm {h}}^l_{<i-1, i+1>} = [\mathbf {\mathrm {h}}^l_{i-1}; \mathbf {\mathrm {h}}^l_{i}; \mathbf {\mathrm {h}}^l_{i+1}]$ and $\langle A,B \rangle _i=\mbox{Tr}(AB[i, :, :]^T)$. This operation applies $L$ times, obtaining the final context-dependent representation, $\mathbf {\mathrm {h}}_i = \mathbf {\mathrm {h}}_i^L$, of $c_i$.
<<</CNN-based>>>
<<<Transformer-based>>>
Transformer BIBREF15 is originally proposed for sequence transduction, on which it has shown several advantages over the recurrent or convolutional neural networks. Intrinsically, it can also be applied to the sequence labeling task using only its encoder part.
In similar, let $\mathbf {\mathrm {h}}^l_i$ denote the hidden representation of $c_i$ in the $l^{th}$ layer with $\mathbf {\mathrm {h}}_i^0=\mathbf {\mathrm {x}}^c_i$, and $f^l$ denote a feedforward module used in this layer. To obtain the hidden representation matrix $\mathbf {\mathrm {h}}^{l+1}$ of $s$ in the $(l+1)^{th}$ layer, it takes the self-attention of $\mathbf {\mathrm {h}}^l$:
where $d^l$ is the dimension of $\mathbf {\mathrm {h}}^l_i$. This process applies $L$ times, obtaining $\mathbf {\mathrm {h}}^L$. After that, the position information of each character $c_i$ is introduced into $\mathbf {\mathrm {h}}^L_i$ to obtain its final context-dependent representation $\mathbf {\mathrm {h}}_i$:
where $PE_i=sin(i/1000^{2j/d^L}+j\%2\cdot \pi /2)$. We recommend you to refer to the excellent guides “The Annotated Transformer.” for more implementation detail of this architecture.
<<</Transformer-based>>>
<<</Sequence Modeling Layer>>>
<<<Label Inference Layer>>>
On top of the sequence modeling layer, a sequential conditional random field (CRF) BIBREF16 layer is applied to perform label inference for the character sequence as a whole:
where $\mathcal {Y}_s$ denotes all possible label sequences of $s$, $\phi _{t}({y}^\prime , {y}|\mathbf {\mathrm {s}})=\exp (\mathbf {w}^T_{{y}^\prime , {y}} \mathbf {\mathrm {h}}_t + b_{{y}^\prime , {y}})$, where $\mathbf {w}_{{y}^\prime , {y}}$ and $ b_{{y}^\prime , {y}}$ are trainable parameters corresponding to the label pair $({y}^\prime , {y})$, and $\mathbf {\theta }$ denotes model parameters. For label inference, it searches for the label sequence $\mathbf {\mathrm {y}}^{*}$ with the highest conditional probability given the input sequence ${s}$:
which can be efficiently solved using the Viterbi algorithm BIBREF17.
<<</Label Inference Layer>>>
<<</Generic Character-based Neural Architecture for Chinese NER>>>
<<<Lattice-LSTM for Chinese NER>>>
Lattice-LSTM designs to incorporate word lexicon into the character-based neural sequence labeling model. To achieve this purpose, it first performs lexicon matching on the input sentence. It will add an directed edge from $c_i$ to $c_j$, if the sub-sequence $\lbrace c_i, \cdots , c_j\rbrace $ of the sentence matches a word of the lexicon for $i < j$. And it preserves all lexicon matching results on a character by allowing the character to connect with multiple characters. Concretely, for a sentence $\lbrace c_1, c_2, c_3, c_4, c_5\rbrace $, if both its sub-sequences $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ match a word of the lexicon, it will add a directed edge from $c_1$ to $c_4$ and a directed edge from $c_2$ to $c_4$. This practice will turn the input form of the sentence from a chained sequence into a graph.
To model the graph-based input, Lattice-LSTM accordingly modifies the LSTM-based sequence modeling layer. Specifically, let $s_{<*, j>}$ denote the list of sub-sequences of a sentence $s$ that match the lexicon and end with $c_j$, $\mathbf {\mathrm {h}}_{<*, j>}$ denote the corresponding hidden state list $\lbrace \mathbf {\mathrm {h}}_i, \forall s_{<i, j>} \in s_{<*, j>}\rbrace $, and $\mathbf {\mathrm {c}}_{<*, j>}$ denote the corresponding memory cell list $\lbrace \mathbf {\mathrm {c}}_i, \forall s_{<i, j>} \in s_{<*, j>}\rbrace $. In Lattice-LSTM, the hidden state $\mathbf {\mathrm {h}}_j$ and memory cell $\mathbf {\mathrm {c}}_j$ of $c_j$ are now updated by:
where $f$ is a simplified representation of the function used by Lattice-LSTM to perform memory update. Note that, in the updating process, the inputs now contains current step character representation $\mathbf {\mathrm {x}}_j^c$, last step hidden state $\mathbf {\mathrm {h}}_{j-1}$ and memory cell $\mathbf {\mathrm {c}}_{j-1}$, and lexicon matched sub-sequences $s_{<*, j>}$ and their corresponding hidden state and memory cell lists, $\mathbf {\mathrm {h}}_{<*, j>}$ and $\mathbf {\mathrm {c}}_{<*, j>}$. We refer you to the paper of Lattice-LSTM BIBREF0 for more detail of the implementation of $f$.
A problem of Lattice-LSTM is that its speed of sequence modeling is much slower than the normal LSTM architecture since it has to additionally model $s_{<*, j>}$, $\mathbf {\mathrm {h}}_{<*, j>}$, and $\mathbf {\mathrm {c}}_{<*, j>}$ for memory update. In addition, considering the implementation of $f$, it is hard for Lattice-LSTM to process multiple sentences in parallel (in the published implementation of Lattice-LSTM, the batch size was set to 1). This raises the necessity to design a simpler way to achieve the function of Lattice-LSTM for incorporating the word lexicon into the character-based NER model.
<<</Lattice-LSTM for Chinese NER>>>
<<<Proposed Method>>>
In this section, we introduce our method, which aims to keep the merit of Lattice-LSTM and at the same time, make the computation efficient. We will start the description of our method from our thinking on Lattice-LSTM.
From our view, the advance of Lattice-LSTM comes from two points. The first point is that it preserve all possible matching words for each character. This can avoid the error propagation introduced by heuristically choosing a matching result of the character to the NER system. The second point is that it can introduce pre-trained word embeddings to the system, which bring great help to the final performance. While the disadvantage of Lattice-LSTM is that it turns the input form of a sentence from a chained sequence into a graph. This will greatly increase the computational cost for sentence modeling. Therefore, the design of our method should try to keep the chained input form of the sentence and at the same time, achieve the above two advanced points of Lattice-LSTM.
With this in mind, our method design was firstly motivated by the Softword technique, which was originally used for incorporating word segmentation information into downstream tasks BIBREF18, BIBREF19. Precisely, the Softword technique augments the representation of a character with the embedding of its corresponding segmentation label:
Here, $seg(c_j) \in \mathcal {Y}_{seg}$ denotes the segmentation label of the character $c_j$ predicted by the word segmentor, $\mathbf {e}^{seg}$ denotes the segmentation label embedding lookup table, and commonly $\mathcal {Y}_{seg}=\lbrace \text{B}, \text{M}, \text{E}, \text{S}\rbrace $ with B, M, E indicating that the character is the beginning, middle, and end of a word, respectively, and S indicating that the character itself forms a single-character word.
The first idea we come out based on the Softword technique is to construct a word segmenter using the lexicon and allow a character to have multiple segmentation labels. Take the sentence $s=\lbrace c_1, c_2, c_3, c_4, c_5\rbrace $ as an example. If both its sub-sequences $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_3, c_4\rbrace $ match a word of the lexicon, then the segmentation label sequence of $s$ using the lexicon is $segs(s)=\lbrace \lbrace \text{B}\rbrace , \lbrace \text{M}\rbrace , \lbrace \text{B}, \text{M}\rbrace , \lbrace \text{E}\rbrace , \lbrace \text{O}\rbrace \rbrace $. Here, $segs(s)_1=\lbrace \text{B}\rbrace $ indicates that there is at least one sub-sequence of $s$ matching a word of the lexicon and beginning with $c_1$, $segs(s)_3=\lbrace \text{B}, \text{M}\rbrace $ means that there is at least one sub-sequence of $s$ matching the lexicon and beginning with $c_3$ and there is also at least one lexicon matched sub-sequence in the middle of which $c_3$ occurs, and $segs(s)_5=\lbrace \text{O}\rbrace $ means that there is no sub-sequence of $s$ that matches the lexicon and contains $c_5$. The character representation is then obtained by:
where $\mathbf {e}^{seg}(segs(s)_j)$ is a 5-dimensional binary vector with each dimension corresponding to an item of $\lbrace \text{B, M, E, S, O\rbrace }$. We call this method as ExSoftword in the following.
However, through the analysis of ExSoftword, we can find out that the ExSoftword method cannot fully inherit the two merits of Lattice-LSTM. Firstly, it cannot not introduce pre-trained word embeddings. Secondly, though it tries to keep all the lexicon matching results by allowing a character to have multiple segmentation labels, it still loses lots of information. In many cases, we cannot restore the matching results from the segmentation label sequence. Consider the case that in the sentence $s=\lbrace c_1, c_2, c_3, c_4\rbrace $, $\lbrace c_1, c_2, c_3\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ match the lexicon. In this case, $segs(s) = \lbrace \lbrace \text{B}\rbrace , \lbrace \text{B}, \text{M}\rbrace , \lbrace \text{M}, \text{E}\rbrace , \lbrace \text{E}\rbrace \rbrace $. However, based on $segs(s)$ and $s$, we cannot say that it is $\lbrace c_1, c_2, c_3\rbrace $ and $\lbrace c_2, c_3, c_4\rbrace $ matching the lexicon since we will obtain the same segmentation label sequence when $\lbrace c_1, c_2, c_3, c_4\rbrace $ and $\lbrace c_2,c_3\rbrace $ match the lexicon.
To this end, we propose to preserving not only the possible segmentation labels of a character but also their corresponding matched words. Specifically, in this improved method, each character $c$ of a sentence $s$ corresponds to four word sets marked by the four segmentation labels “BMES". The word set $\rm {B}(c)$ consists of all lexicon matched words on $s$ that begin with $c$. Similarly, $\rm {M}(c)$ consists of all lexicon matched words in the middle of which $c$ occurs, $\rm {E}(c)$ consists of all lexicon matched words that end with $c$, and $\rm {S}(c)$ is the single-character word comprised of $c$. And if a word set is empty, we will add a special word “NONE" to it to indicate this situation. Consider the sentence $s=\lbrace c_1, \cdots , c_5\rbrace $ and suppose that $\lbrace c_1, c_2\rbrace $, $\lbrace c_1, c_2, c_3\rbrace $, $\lbrace c_2, c_3, c_4\rbrace $, and $\lbrace c_2, c_3, c_4, c_5\rbrace $ match the lexicon. Then, for $c_2$, $\rm {B}(c_2)=\lbrace \lbrace c_2, c_3, c_4\rbrace , \lbrace c_2, c_3, c_4, c_5\rbrace \rbrace $, $\rm {M}(c_2)=\lbrace \lbrace c_1, c_2, c_3\rbrace \rbrace $, $\rm {E}(c_2)=\lbrace \lbrace c_1, c_2\rbrace \rbrace $, and $\rm {S}(c_2)=\lbrace NONE\rbrace $. In this way, we can now introduce the pre-trained word embeddings and moreover, we can exactly restore the matching results from the word sets of each character.
The next step of the improved method is to condense the four word sets of each character into a fixed-dimensional vector. In order to retain information as much as possible, we choose to concatenate the representations of the four word sets to represent them as a whole and add it to the character representation:
Here, $\mathbf {v}^s$ denotes the function that maps a single word set to a dense vector.
This also means that we should map each word set into a fixed-dimensional vector. To achieve this purpose, we first tried the mean-pooling algorithm to get the vector representation of a word set $\mathcal {S}$:
Here, $\mathbf {e}^w$ denotes the word embedding lookup table. However, the empirical studies, as depicted in Table TABREF31, show that this algorithm performs not so well . Through the comparison with Lattice-LSTM, we find out that in Lattice-LSTM, it applies a dynamic attention algorithm to weigh each matched word related to a single character. Motivated by this practice, we propose to weighing the representation of each word in the word set to get the pooling representation of the word set. However, considering the computational efficiency, we do not want to apply a dynamical weighing algorithm, like attention, to get the weight of each word. With this in mind, we propose to using the frequency of the word as an indication of its weight. The basic idea beneath this algorithm is that the more times a character sequence occurs in the data, the more likely it is a word. Note that, the frequency of a word is a static value and can be obtained offline. This can greatly accelerate the calculation of the weight of each word (e.g., using a lookup table).
Specifically, let $w_c$ denote the character sequence constituting $w$ and $z(w)$ denote the frequency of $w_c$ occurring in the statistic data set (in this work, we combine training and testing data of a task to construct the statistic data set. Of course, if we have unlabelled data for the task, we can take the unlabeled data as the statistic data set). Note that, we do not add the frequency of $w$ if $w_c$ is covered by that of another word of the lexicon in the sentence. For example, suppose that the lexicon contains both “南京 (Nanjing)" and “南京市 (Nanjing City)". Then, when counting word frequency on the sequence “南京市长江大桥", we will not add the frequency of “南京" since it is covered by “南京市" in the sequence. This can avoid the situation that the frequency of “南京" is definitely higher than “南京市". Finally, we get the weighted representation of the word set $\mathcal {S}$ by:
where
Here, we perform weight normalization on all words of the four word sets to allow them compete with each other across sets.
Further, we have tried to introducing a smoothing to the weight of each word to increase the weights of infrequent words. Specifically, we add a constant $c$ into the frequency of each word and re-define $\mathbf {v}^s$ by:
where
We set $c$ to the value that there are 10% of training words occurring less than $c$ times within the statistic data set. In summary, our method mainly contains the following four steps. Firstly, we scan each input sentence with the word lexicon, obtaining the four 'BMES' word sets for each character of the sentence. Secondly, we look up the frequency of each word counted on the statistic data set. Thirdly, we obtain the vector representation of the four word sets of each character according to Eq. (DISPLAY_FORM22), and add it to the character representation according to Eq. (DISPLAY_FORM20). Finally, based on the augmented character representations, we perform sequence labeling using any appropriate neural sequence labeling model, like LSTM-based sequence modeling layer + CRF label inference layer.
<<</Proposed Method>>>
<<<Experiments>>>
<<<Experiment Design>>>
Firstly, we performed a development study on our method with the LSTM-based sequence modeling layer, in order to compare the implementations of $\mathbf {v}^s$ and to determine whether or not to use character bigrams in our method. Decision made in this step will be applied to the following experiments. Secondly, we verified the computational efficiency of our method compared with Lattice-LSTM and LR-CNN BIBREF20, which is a followee of Lattice-LSTM for faster inference speed. Thirdly, we verified the effectiveness of our method by comparing its performance with that of Lattice-LSTM and other comparable models on four benchmark Chinese NER data sets. Finally, we verified the applicability of our method to different sequence labeling models.
<<</Experiment Design>>>
<<<Experiment Setup>>>
Most experimental settings in this work follow the protocols of Lattice-LSTM BIBREF0, including tested datasets, compared baselines, evaluation metrics (P, R, F1), and so on. To make this work self-completed, we concisely illustrate some primary settings of this work.
<<<Datasets>>>
The methods were evaluated on four Chinese NER datasets, including OntoNotes BIBREF21, MSRA BIBREF22, Weibo NER BIBREF23, BIBREF24, and Resume NER BIBREF0. OntoNotes and MSRA are from the newswire domain, where gold-standard segmentation is available for training data. For OntoNotes, gold segmentation is also available for development and testing data. Weibo NER and Resume NER are from social media and resume, respectively. There is no gold standard segmentation in these two datasets. Table TABREF26 shows statistic information of these datasets. As for the lexicon, we used the same one as Lattice-LSTM, which contains 5.7k single-character words, 291.5k two-character words, 278.1k three-character words, and 129.1k other words.
<<</Datasets>>>
<<<Implementation Detail>>>
When applying the LSTM-based sequence modeling layer, we followed most implementation protocols of Lattice-LSTM, including character and word embedding sizes, dropout, embedding initialization, and LSTM layer number. The hidden size was set to 100 for Weibo and 256 for the rest three datasets. The learning rate was set to 0.005 for Weibo and Resume and 0.0015 for OntoNotes and MSRA with Adamax BIBREF25.
When applying the CNN- and transformer- based sequence modeling layers, most hyper-parameters were the same as those used in the LSTM-based model. In addition, the layer number $L$ for the CNN-based model was set to 4, and that for transformer-based model was set to 2 with h=4 parallel attention layers. Kernel number $k_f$ of the CNN-based model was set to 512 for MSRA and 128 for the other datasets in all layers.
<<</Implementation Detail>>>
<<</Experiment Setup>>>
<<<Development Experiments>>>
In this experiment, we compared the implementations of $\mathbf {v}^s$ with the LSTM-based sequence modeling layer. In addition, we study whether or not character bigrams can bring improvement to our method.
Table TABREF31 shows performance of three implementations of $\mathbf {v}^s$ without using character bigrams. From the table, we can see that the weighted pooling algorithm performs generally better than the other two implementations. Of course, we may obtain better results with the smoothed weighted pooling algorithm by reducing the value of $c$ (when $c=0$, it is equivalent to the weighted pooling algorithm). We did not do so for two reasons. The first one is to guarantee the generality of our system for unexplored tasks. The second one is that the performance of the weighted pooling algorithm is good enough compared with other state-of-the-art baselines. Therefore, in the following experiments, we in default applied the weighted pooling algorithm to implement $\mathbf {v}^s$.
Figure FIGREF32 shows the F1-score of our method against the number of training iterations when using character bigram or not. From the figure, we can see that additionally introducing character bigrams cannot bring considerable improvement to our method. A possible explanation of this phenomenon is that the introduced word information by our proposed method has covered the bichar information. Therefore, in the following experiments, we did not use bichar in our method.
<<</Development Experiments>>>
<<<Computational Efficiency Study>>>
Table TABREF34 shows the inference speed of our method when implementing the sequnece modeling layer with the LSTM-based, CNN-based, and Transformer-based architecture, respectively. The speed was evaluated by average sentences per second using a GPU (NVIDIA TITAN X). For a fair comparison with Lattice-LSTM and LR-CNN, we set the batch size of our method to 1 at inference time. From the table, we can see that our method has a much faster inference speed than Lattice-LSTM when using the LSTM-based sequence modeling layer, and it was also much faster than LR-CNN, which used an CNN architecture to implement the sequence modeling layer. And as expected, our method with the CNN-based sequence modeling layer showed some advantage in inference speed than those with the LSTM-based and Transformer-based sequence model layer.
<<</Computational Efficiency Study>>>
<<<Effectiveness Study>>>
Table TABREF37$-$TABREF43 show the performance of method with the LSTM-based sequence modeling layer compared with Lattice-LSTM and other comparative baselines.
<<<OntoNotes.>>>
Table TABREF37 shows results on OntoNotes, which has gold segmentation for both training and testing data. The methods of the “Gold seg" and "Auto seg" group are word-based that build on the gold word segmentation results and the automatic segmentation results, respectively. The automatic segmentation results were generated by the segmenter trained on training data of OntoNotes. Methods of the "No seg" group are character-based. From the table, we can obtain several informative observations. First, by replacing the gold segmentation with the automatically generated segmentation, the F1-score of the Word-based (LSTM) + char + bichar model decreased from 75.77% to 71.70%. This shows the problem of the practice that treats the predicted word segmentation result as the true one for the word-based Chinese NER. Second, the Char-based (LSTM)+bichar+ExSoftword model achieved a 71.89% to 72.40% improvement over the Char-based (LSTM)+bichar+softword baseline on the F1-score. This indicates the feasibility of the naive extension of ExSoftword to softword. However, it still greatly underperformed Lattice-LSTM, showing its deficiency in utilizing word information. Finally, our proposed method, which is a further extension of Exsoftword, obtained a statistically significant improvement over Lattice-LSTM and even performed similarly to those word-based methods with gold segmentation, verifying its effectiveness on this data set.
<<</OntoNotes.>>>
<<<MSRA.>>>
Table TABREF40 shows results on MSRA. The word-based methods were built on the automatic segmentation results generated by the segmenter trained on the training data of MSRA. The compared methods include the best statistical models on this data set, which leveraged rich handcrafted features BIBREF28, BIBREF29, BIBREF30, character embedding features BIBREF31, and radical features BIBREF32. From the table, we observe that our method obtained a statistically significant improvement over Lattice-LSTM and the other baselines on recall and F1-score, verifying its effectiveness on this data set.
<<</MSRA.>>>
<<<Weibo/Resume.>>>
Table TABREF42 shows results on Weibo NER, where NE, NM, and Overall denote F1-scores for named entities, nominal entities (excluding named entities), and both, respectively. The existing state-of-the-art system BIBREF19 explored rich embedding features, cross-domain data, and semi-supervised data. From the table, we can see that our proposed method achieved a considerable improvement over the compared baselines on this data set. Table TABREF43 shows results on Resume. Consistent with the observations on the other three tested data sets, our proposed method significantly outperformed Lattice-LSTM and the other compared methods on this data set.
<<</Weibo/Resume.>>>
<<</Effectiveness Study>>>
<<<Transferability Study>>>
Table TABREF46 shows the performance of our method with different sequence modeling architectures. From the table, we can first see that the LSTM-based architecture performed better than the CNN-based and Transformer-based architectures. In addition, our method with different sequence modeling layers consistently outperformed the corresponding ExSoftword baselines. This shows that our method is applicable to different neural sequence modeling architectures for exploiting lexicon information.
<<</Transferability Study>>>
<<</Experiments>>>
<<<Conclusion>>>
In this work, we addressed the computational efficiency of utilizing word lexicons in Chinese NER. To achieve a high-performing NER system with fast inference speed, we proposed adding lexicon information to the character representation while keeping the input form of a sentence as a chained sequence. Experiments on four benchmark Chinese NER datasets show that our method obtains faster inference speed than the comparative methods while achieving high performance. They also show that our method can be applied to different neural sequence labeling models for Chinese NER.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Proposed Method, Abstract"
],
"type": "disordered_section"
}
|
1910.13215
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Transformer-based Cascaded Multimodal Speech Translation
<<<Abstract>>>
This paper describes the cascaded multimodal speech translation systems developed by Imperial College London for the IWSLT 2019 evaluation campaign. The architecture consists of an automatic speech recognition (ASR) system followed by a Transformer-based multimodal machine translation (MMT) system. While the ASR component is identical across the experiments, the MMT model varies in terms of the way of integrating the visual context (simple conditioning vs. attention), the type of visual features exploited (pooled, convolutional, action categories) and the underlying architecture. For the latter, we explore both the canonical transformer and its deliberation version with additive and cascade variants which differ in how they integrate the textual attention. Upon conducting extensive experiments, we found that (i) the explored visual integration schemes often harm the translation performance for the transformer and additive deliberation, but considerably improve the cascade deliberation; (ii) the transformer and cascade deliberation integrate the visual modality better than the additive deliberation, as shown by the incongruence analysis.
<<</Abstract>>>
<<<Introduction>>>
The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition system (ASR) and further translated into the target language using an MT component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpus as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system.
MMT is a relatively new research topic which is interested in leveraging auxiliary modalities such as audio or vision in order to improve translation performance BIBREF6. MMT has proved effective in scenarios such as for disambiguation BIBREF7 or when the source sentences are corrupted BIBREF8. So far, MMT has mostly focused on integrating visual features into neural MT (NMT) systems using visual attention through convolutional feature maps BIBREF9, BIBREF10 or visual conditioning of encoder/decoder blocks through fully-connected features BIBREF11, BIBREF12, BIBREF13, BIBREF14.
Inspired by previous research in MMT, we explore several multimodal integration schemes using action-level video features. Specifically, we experiment with visually conditioning the encoder output and adding visual attention to the decoder. We further extend the proposed schemes to the deliberation variant BIBREF1 of the canonical transformer in two ways: additive and cascade multimodal deliberation, which are distinct in their textual attention regimes. Overall, the results show that multimodality in general leads to performance degradation for the canonical transformer and the additive deliberation variant, but can result in substantial improvements for the cascade deliberation. Our incongruence analysis BIBREF15 reveals that the transformer and cascade deliberation are more sensitive to and therefore more reliant on visual features for translation, whereas the additive deliberation is much less impacted. We also observe that incongruence sensitivity and translation performance are not necessarily correlated.
<<</Introduction>>>
<<<Methods>>>
In this section, we briefly describe the proposed multimodal speech translation system and its components.
<<<Automatic Speech Recognition>>>
The baseline ASR system that we use to obtain English transcripts is an attentive sequence-to-sequence architecture with a stacked encoder of 6 bidirectional LSTM layers BIBREF16. Each LSTM layer is followed by a tanh projection layer. The middle two LSTM layers apply a temporal subsampling BIBREF17 by skipping every other input, reducing the length of the sequence $\mathrm {X}$ from $T$ to $T/4$. All LSTM and projection layers have 320 hidden units. The forward-pass of the encoder produces the source encodings on top of which attention will be applied within the decoder. The hidden and cell states of all LSTM layers are initialized with 0. The decoder is a 2-layer stacked GRU BIBREF18, where the first GRU receives the previous hidden state of the second GRU in a transitional way. GRU layers, attention layer and embeddings have 320 hidden units. We share the input and output embeddings to reduce the number of parameters BIBREF19. At timestep $t\mathrm {=}0$, the hidden state of the first GRU is initialized with the average-pooled source encoding states.
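A minimal PyTorch sketch of such a pyramidal encoder is given below; the input feature dimensionality, the indices of the subsampling layers and the exact placement of the subsampling are assumptions made for illustration.

import torch
import torch.nn as nn

class PyramidalBiLSTMEncoder(nn.Module):
    """Sketch: stacked BiLSTM encoder with tanh projections and temporal subsampling."""
    def __init__(self, feat_dim=43, hidden=320, layers=6, subsample_at=(2, 3)):
        super().__init__()
        self.lstms = nn.ModuleList()
        self.projs = nn.ModuleList()
        self.subsample_at = set(subsample_at)   # layers that skip every other frame
        in_dim = feat_dim
        for _ in range(layers):
            self.lstms.append(nn.LSTM(in_dim, hidden, bidirectional=True, batch_first=True))
            self.projs.append(nn.Linear(2 * hidden, hidden))
            in_dim = hidden

    def forward(self, x):                        # x: (batch, T, feat_dim)
        for i, (lstm, proj) in enumerate(zip(self.lstms, self.projs)):
            if i in self.subsample_at:           # reduce T -> T/2 at this layer
                x = x[:, ::2]
            x, _ = lstm(x)
            x = torch.tanh(proj(x))
        return x                                 # (batch, ~T/4, hidden)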
<<</Automatic Speech Recognition>>>
<<<Deliberation-based NMT>>>
A human translator typically produces a translation draft first, and then refines it towards the final translation. The idea behind the deliberation networks BIBREF20 simulates this process by extending the conventional attentive encoder-decoder architecture BIBREF21 with a second pass refinement decoder. Specifically, the encoder first encodes a source sentence of length $N$ into a sequence of hidden states $\mathcal {H} = \lbrace h_1, h_2,\dots ,h_{N}\rbrace $ on top of which the first pass decoder applies the attention. The pre-softmax hidden states $\lbrace \hat{s}_1,\hat{s}_2,\dots ,\hat{s}_{M}\rbrace $ produced by the decoder lead to a first pass translation $\lbrace \hat{y}_1,\hat{y}_2,\dots , \hat{y}_{M}\rbrace $. The second pass decoder intervenes at this point and generates a second translation by attending separately to both $\mathcal {H}$ and the concatenated state vectors $\lbrace [\hat{s}_1;\hat{y}_1], [\hat{s}_2; \hat{y}_2],\dots ,[\hat{s}_{M}; \hat{y}_{M}]\rbrace $. Two context vectors are produced as a result; together with $s_{t-1}$ (the previous hidden state of the second-pass decoder) and $y_{t-1}$ (its previous output), they are fed to the second-pass decoder to yield $s_t$ and then $y_t$.
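A single-head sketch of the two second-pass attention computations is shown below; the dot-product scoring, the query projections and the dimensionalities are simplifications of the actual attention.

import torch
import torch.nn as nn

class SecondPassAttention(nn.Module):
    """Sketch of the two attention computations of the second-pass decoder:
    one context over the encoder states H, one over the concatenated
    first-pass states and output embeddings [s_hat ; y_hat]."""
    def __init__(self, dim=320):
        super().__init__()
        self.q_enc = nn.Linear(dim, dim)         # query for the encoder memory
        self.q_fp = nn.Linear(dim, 2 * dim)      # query for the first-pass memory

    @staticmethod
    def dot_attend(q, mem):                      # q: (B, D), mem: (B, M, D)
        w = torch.softmax(torch.bmm(mem, q.unsqueeze(-1)).squeeze(-1), dim=-1)
        return torch.bmm(w.unsqueeze(1), mem).squeeze(1)

    def forward(self, s_prev, H, s_hat, y_hat_emb):
        fp_mem = torch.cat([s_hat, y_hat_emb], dim=-1)      # (B, M, 2*dim)
        c_enc = self.dot_attend(self.q_enc(s_prev), H)      # context over H
        c_fp = self.dot_attend(self.q_fp(s_prev), fp_mem)   # context over first pass
        return c_enc, c_fp   # fed, with s_prev and y_prev, to the second-pass cell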
A transformer-based deliberation architecture is proposed by BIBREF1. It follows the same two-pass refinement process, with every second-pass decoder block attending to both the encoder output $\mathcal {H}$ and the first-pass pre-softmax hidden states $\mathcal {\hat{S}}$. However, it differs from BIBREF20 in that the actual first-pass translation $\hat{Y}$ is not used for the second-pass attention.
<<</Deliberation-based NMT>>>
<<<Multimodality>>>
<<<Visual Features>>>
We experiment with three types of video features, namely average-pooled vector representations (), convolutional layer outputs (), and Ten-Hot action category embeddings (). The features are provided by the How2 dataset using the following approach: a video is segmented into smaller parts of 16 frames each, and the segments are fed to a 3D ResNeXt-101 CNN BIBREF22, trained to recognise 400 action classes BIBREF23. The 2048-D fully-connected features are then averaged across the segments to obtain a single feature vector for the overall video.
In order to obtain the features, 16 equi-distant frames are sampled from a video, and they are then used as input to an inflated 3D ResNet-50 CNN BIBREF24 fine-tuned on the Moments in Time action video dataset. The CNN hence takes in a video and classifies it into one of 339 categories. The features, taken at the CONV$_4$ layer of the network, have a $7 \times 7 \times 2048$ dimensionality.
Higher-level semantic information can be more helpful than convolutional features. We apply the same CNN to a video as we do for features, but this time the focus is on the softmax layer output: we process the embedding matrix to keep the 10 most probable category embeddings intact while zeroing out the remaining ones. We call this representation ten-hot action category embeddings ().
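The shape manipulations behind the three feature types can be sketched as follows; the number of segments, the category-embedding dimensionality and the random arrays standing in for CNN outputs are placeholders.

import numpy as np

segment_feats = np.random.rand(12, 2048)        # per-segment FC features of one video
pooled = segment_feats.mean(axis=0)             # 2048-D video-level vector

conv_feats = np.random.rand(7, 7, 2048)         # CONV_4 output
conv_regions = conv_feats.reshape(49, 2048)     # 49 attendable regions

probs = np.random.rand(339)                     # softmax over the action classes
cat_emb = np.random.rand(339, 300)              # category word embeddings (dim assumed)
mask = np.zeros(339)
mask[np.argsort(probs)[-10:]] = 1.0             # keep the 10 most probable classes
ten_hot = cat_emb * mask[:, None]               # zero out the remaining rows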
<<</Visual Features>>>
<<<Integration Approaches>>>
Encoder with Additive Visual Conditioning (-) In this approach, inspired by BIBREF7, we add a projection of the visual features to each output of the vanilla transformer encoder (-). This projection is strictly linear from the 2048-D features to the 1024-D space in which the self attention hidden states reside, and the projection matrix is learned jointly with the translation model.
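Under the assumption of a single pooled 2048-D feature vector per video, this conditioning step can be sketched as:

import torch
import torch.nn as nn

class AdditiveVisualConditioning(nn.Module):
    """Sketch: add a linear projection of the 2048-D video feature to every
    position of the encoder output (broadcast over time)."""
    def __init__(self, vis_dim=2048, model_dim=1024):
        super().__init__()
        self.proj = nn.Linear(vis_dim, model_dim, bias=False)   # learned jointly

    def forward(self, enc_out, vis_feat):
        # enc_out: (B, N, model_dim), vis_feat: (B, vis_dim)
        return enc_out + self.proj(vis_feat).unsqueeze(1)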
Decoder with Visual Attention (-) In order to accommodate attention to visual features at the decoder side and inspired by BIBREF25, we insert one layer of visual cross attention at a decoder block immediately before the fully-connected layer. We name the transformer decoder with such an extra layer as –, where this layer is immediately after the textual attention to the encoder output. Specifically, we experiment with attention to , and features separately. The visual attention is distributed across the 49 video regions in , the 339 action category word embeddings in , or the 32 rows in where we reshape the 2048-D vector into a $32 \times 64$ matrix.
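A sketch of such an extra cross-attention layer for the pooled-feature case is given below; the 64-to-1024 projection of the reshaped rows and the residual connection are our assumptions to keep the shapes compatible.

import torch
import torch.nn as nn

class DecoderVisualAttention(nn.Module):
    """Sketch of the visual cross-attention layer inserted before the
    feed-forward layer of a decoder block."""
    def __init__(self, model_dim=1024, mem_dim=64, heads=16):
        super().__init__()
        self.mem_proj = nn.Linear(mem_dim, model_dim)
        self.attn = nn.MultiheadAttention(model_dim, heads, batch_first=True)

    def forward(self, dec_states, pooled_feat):
        # dec_states: (B, M, 1024), pooled_feat: (B, 2048)
        mem = self.mem_proj(pooled_feat.view(-1, 32, 64))   # (B, 32, 1024)
        out, _ = self.attn(dec_states, mem, mem)
        return dec_states + out                             # residual connection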
<<</Integration Approaches>>>
<<<Multimodal Transformers>>>
The vanilla text-only transformer (-) is used as a baseline, and we design two variants: with additive visual conditioning (-) and with attention to visual features (-). A -features a -and a vanilla transformer decoder (-), therefore utilising visual information only at the encoder side. In contrast, a -is configured with a -and a –, exploiting visual cues only at the decoder. Figure FIGREF7 summarises the two approaches.
<<</Multimodal Transformers>>>
<<<Multimodal Deliberation>>>
Our multimodal deliberation models differ from each other in two ways: whether to use additive () BIBREF7 or cascade () textual deliberation to integrate the textual attention to the original input and to the first pass, and whether to employ visual attention (-) or additive visual conditioning (-) to integrate the visual features into the textual MT model. Figures FIGREF9 and FIGREF10 show the configurations of our additive and cascade deliberation models, respectively, each also showing the connections necessary for -and -.
Additive () & Cascade () Textual Deliberation
In an additive-deliberation second-pass decoder (–) block, the first layer is still self-attention, whereas the second layer is the addition of two separate attention sub-layers. The first sub-layer attends to the encoder output in the same way -does, while the attention of the second sub-layer is distributed across the concatenated first pass outputs and hidden states. The input to both sub-layers is the output of the self-attention layer, and the outputs of the sub-layers are summed as the final output and then (with a residual connection) fed to the visual attention layer if the decoder is multimodal or to the fully connected layer otherwise.
For the cascade version, the only difference is that, instead of two sub-layers, we have two separate, successive layers with the same functionalities.
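Schematically, ignoring layer normalisation and assuming residual connections, the two regimes combine the two textual cross-attentions as follows:

def additive_block(x, attn_enc, attn_fp):
    """Additive deliberation: two parallel attention sub-layers fed with the
    same input; their outputs are summed (sketch, residuals assumed)."""
    return x + attn_enc(x) + attn_fp(x)

def cascade_block(x, attn_enc, attn_fp):
    """Cascade deliberation: the same two attentions applied as successive layers."""
    h = x + attn_enc(x)
    return h + attn_fp(h)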
It is worth mentioning that we introduce the attention to the first pass only at the initial three decoder blocks out of the total six of the second pass decoder (-), following BIBREF7.
Additive Visual Conditioning (-) & Visual Attention (-)
-and -are simply applying -and -respectively to a deliberation model, therefore more details have been introduced in Section SECREF5.
For -, similar to in -, we add a projection of the visual features to the output of -, and use -as the first pass decoder and either additive or cascade deliberation as the -.
For -, in a similar vein as -, the encoder in this setting is simply -and the first pass decoder is just -, but this time -is responsible for attending to the first pass output as well as the visual features. For both additive and cascade deliberation, a visual attention layer is inserted immediately before the fully-connected layer, so that the penultimate layer of a decoder block now attends to visual information.
<<</Multimodal Deliberation>>>
<<</Multimodality>>>
<<</Methods>>>
<<<Experiments>>>
<<<Dataset>>>
We stick to the default training/validation/test splits and the pre-extracted speech features for the How2 dataset, as provided by the organizers. As for the pre-processing, we lowercase the sentences and then tokenise them using Moses BIBREF26. We then apply subword segmentation BIBREF27 by learning separate English and Portuguese models with 20,000 merge operations each. The English corpus used when training the subword model consists of both the ground-truth video subtitles and the noisy transcripts produced by the underlying ASR system. We do not share vocabularies between the source and target domains. Finally for the post-processing step, we merge the subword tokens, apply recasing and detokenisation. The recasing model is a standard Moses baseline trained again on the parallel How2 corpus.
The baseline ASR system is trained on the How2 dataset as well. This system is then used to obtain noisy transcripts for the whole dataset, using beam-search with a beam size of 10. The pre-processing pipeline for the ASR differs from the MT pipeline in that punctuation is removed and the subword segmentation is performed using SentencePiece BIBREF28 with a vocabulary size of 5,000. The test-set performance of this ASR is around 19% WER.
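For reference, training such a subword model with the SentencePiece Python bindings looks roughly as follows; the file names are placeholders.

import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="asr_transcripts.en",     # placeholder path to the training text
    model_prefix="asr_sp5k",
    vocab_size=5000,
)
sp = spm.SentencePieceProcessor(model_file="asr_sp5k.model")
print(sp.encode("maintain a constant temperature", out_type=str))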
<<</Dataset>>>
<<<Training>>>
We train our transformer and deliberation models until convergence largely with transformer_big hyperparameters: 16 attention heads, 1024-D hidden states and a dropout of 0.1. During inference, we apply beam-search with beam size of 10. For deliberation, we first train the underlying transformer model until convergence, and use its weights to initialise the encoder and the first pass decoder. After freezing those weights, we train -until convergence. The reason for the partial freezing is that our preliminary experiments showed that it enabled better performance compared to updating the whole model. Following BIBREF20, we obtain 10-best samples from the first pass with beam-search for source augmentation during the training of -.
We train all the models on an Nvidia RTX 2080Ti with a batch size of 1024, a base learning rate of 0.02 with 8,000 warm-up steps for the Adam BIBREF29 optimiser, and a patience of 10 epochs for early stopping based on approx-BLEU () for the transformers and 3 epochs for the deliberation models. After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set. The transformer and the deliberation models are based upon the library BIBREF31 (v1.3.0 RC1) as well as the vanilla transformer-based deliberation BIBREF20 and their multimodal variants BIBREF7.
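The warm-up behaviour corresponds approximately to an inverse-square-root schedule of the following form; the exact scaling applied by the toolkit may differ, so this is only an approximation.

def transformer_lr(step, model_dim=1024, warmup=8000, base_lr=0.02):
    """Approximate Noam-style schedule: linear warm-up, then inverse-sqrt decay."""
    step = max(step, 1)
    return base_lr * model_dim ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)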
<<</Training>>>
<<</Experiments>>>
<<<Results & Analysis>>>
<<<Quantitative Results>>>
We report tokenised results obtained using the multeval toolkit BIBREF32. We focus on single system performance and thus, do not perform any ensembling or checkpoint averaging.
The scores of the models are shown in Table TABREF17. Evident from the table is that the best models overall are -and –with a score of 39.8, and the other multimodal transformers have slightly worse performance, showing score drops around 0.1. Also, none of the multimodal transformer systems are significantly different from the baseline, which is a sign of the limited extent to which visual features affect the output.
For additive deliberation (-), the performance variation is considerably larger: -and take the lead with 37.6 , but the next best system (-) plunges to 37.2. The other two (-& -) also have noticeably worse results (36.0 and 37.0). Overall, however, -is still similar to the transformers in that the baseline generally yields higher-quality translations.
Cascade deliberation, on the other hand, is different in that its text-only baseline is outperformed by most of its multimodal counterparts. Multimodality enables boosts as large as around 1 point in the cases of -and -, both of which achieve about 37.4 and are significantly different from the baseline.
Another observation is that the deliberation models as a whole lead to worse performance than the canonical transformers, with deterioration ranging from 2.3 (across -variants) to 3.5 (across -systems), which defies the findings of BIBREF7. We leave this to future investigations.
<<</Quantitative Results>>>
<<<Incongruence Analysis>>>
To further probe the effect of multimodality, we follow the incongruent decoding approach BIBREF15, where our multimodal models are fed with mismatched visual features. The general assumption is that a model will have learned to exploit visual information to help with its translation, if it shows substantial performance degradation when given wrong visual features. The results are reported in Table TABREF19.
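One simple way to construct such mismatched inputs is to permute the visual features across the test set, as sketched below; the derangement check is an optional strictness that may not be part of the original procedure.

import numpy as np

def make_incongruent(visual_feats, seed=0):
    """Pair every test sentence with the visual features of a different video."""
    rng = np.random.default_rng(seed)
    n = len(visual_feats)
    perm = rng.permutation(n)
    while np.any(perm == np.arange(n)):      # re-draw until no item maps to itself
        perm = rng.permutation(n)
    return [visual_feats[i] for i in perm]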
Overall, there are considerable parallels between the transformers and the cascade deliberation models in terms of the incongruence effect, such as universal performance deterioration (ranging from 0.1 to 0.6 ) and more noticeable score changes ($\downarrow $ 0.5 for –and $\downarrow $ 0.6 for —) in the -setting compared to the other scenarios. Additive deliberation, however, manifests a drastically different pattern, showing almost no incongruence effect for -, only a 0.2 decrease for -, and even a 0.1 boost for -and -.
Therefore, we conclude that and -models are considerably more sensitive to incorrect visual information than -, which means the former better utilise visual clues during translation.
Interestingly, the extent of performance degradation caused by incongruence is not necessarily correlated with the congruent scores. For example, –is on par with –in congruent decoding (differing by around 0.1 ), but the former suffers a loss of only 0.1 with incongruence whereas the figure for the latter is 0.4, in addition to the fact that the latter becomes significantly different after incongruent decoding. This means that some multimodal models that are sensitive to incongruence likely complement visual attention with textual attention, but without obtaining higher-quality translations as a result.
The differences between the multimodal behaviour of additive and cascade deliberation also warrant more investigation, since the two types of deliberation are identical in their utilisation of visual features and only vary in their handling of the textual attention to the outputs of the encoder and the first pass decoder.
<<</Incongruence Analysis>>>
<<</Results & Analysis>>>
<<<Conclusions>>>
We explored a series of transformers and deliberation based models to approach cascaded multimodal speech translation as our participation in the How2-based speech translation task of IWSLT 2019. We submitted the –system, which is a canonical transformer with visual attention over the convolutional features, as our primary system with the remaining ones marked as contrastive ones. The primary system obtained a of 39.63 on the public IWSLT19 test set, whereas -, the top contrastive system on the same set, achieved 39.85. Our main conclusions are as follows: (i) the visual modality causes varying levels of translation quality damage to the transformers and additive deliberation, but boosts cascade deliberation; (ii) the multimodal transformers and cascade deliberation show performance degradation due to incongruence, but additive deliberation is not as affected; (iii) there is no strict correlation between incongruence sensitivity and translation performance.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Introduction, Abstract"
],
"type": "disordered_section"
}
|
1910.13215
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Transformer-based Cascaded Multimodal Speech Translation
<<<Abstract>>>
This paper describes the cascaded multimodal speech translation systems developed by Imperial College London for the IWSLT 2019 evaluation campaign. The architecture consists of an automatic speech recognition (ASR) system followed by a Transformer-based multimodal machine translation (MMT) system. While the ASR component is identical across the experiments, the MMT model varies in terms of the way of integrating the visual context (simple conditioning vs. attention), the type of visual features exploited (pooled, convolutional, action categories) and the underlying architecture. For the latter, we explore both the canonical transformer and its deliberation version with additive and cascade variants which differ in how they integrate the textual attention. Upon conducting extensive experiments, we found that (i) the explored visual integration schemes often harm the translation performance for the transformer and additive deliberation, but considerably improve the cascade deliberation; (ii) the transformer and cascade deliberation integrate the visual modality better than the additive deliberation, as shown by the incongruence analysis.
<<</Abstract>>>
<<<Introduction>>>
The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition system (ASR) and further translated into the target language using an MT component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpus as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system.
MMT is a relatively new research topic which is interested in leveraging auxiliary modalities such as audio or vision in order to improve translation performance BIBREF6. MMT has proved effective in scenarios such as for disambiguation BIBREF7 or when the source sentences are corrupted BIBREF8. So far, MMT has mostly focused on integrating visual features into neural MT (NMT) systems using visual attention through convolutional feature maps BIBREF9, BIBREF10 or visual conditioning of encoder/decoder blocks through fully-connected features BIBREF11, BIBREF12, BIBREF13, BIBREF14.
Inspired by previous research in MMT, we explore several multimodal integration schemes using action-level video features. Specifically, we experiment with visually conditioning the encoder output and adding visual attention to the decoder. We further extend the proposed schemes to the deliberation variant BIBREF1 of the canonical transformer in two ways: additive and cascade multimodal deliberation, which are distinct in their textual attention regimes. Overall, the results show that multimodality in general leads to performance degradation for the canonical transformer and the additive deliberation variant, but can result in substantial improvements for the cascade deliberation. Our incongruence analysis BIBREF15 reveals that the transformer and cascade deliberation are more sensitive to and therefore more reliant on visual features for translation, whereas the additive deliberation is much less impacted. We also observe that incongruence sensitivity and translation performance are not necessarily correlated.
<<</Introduction>>>
<<<Methods>>>
In this section, we briefly describe the proposed multimodal speech translation system and its components.
<<<Automatic Speech Recognition>>>
The baseline ASR system that we use to obtain English transcripts is an attentive sequence-to-sequence architecture with a stacked encoder of 6 bidirectional LSTM layers BIBREF16. Each LSTM layer is followed by a tanh projection layer. The middle two LSTM layers apply a temporal subsampling BIBREF17 by skipping every other input, reducing the length of the sequence $\mathrm {X}$ from $T$ to $T/4$. All LSTM and projection layers have 320 hidden units. The forward-pass of the encoder produces the source encodings on top of which attention will be applied within the decoder. The hidden and cell states of all LSTM layers are initialized with 0. The decoder is a 2-layer stacked GRU BIBREF18, where the first GRU receives the previous hidden state of the second GRU in a transitional way. GRU layers, attention layer and embeddings have 320 hidden units. We share the input and output embeddings to reduce the number of parameters BIBREF19. At timestep $t\mathrm {=}0$, the hidden state of the first GRU is initialized with the average-pooled source encoding states.
<<</Automatic Speech Recognition>>>
<<<Deliberation-based NMT>>>
A human translator typically produces a translation draft first, and then refines it towards the final translation. The idea behind the deliberation networks BIBREF20 simulates this process by extending the conventional attentive encoder-decoder architecture BIBREF21 with a second pass refinement decoder. Specifically, the encoder first encodes a source sentence of length $N$ into a sequence of hidden states $\mathcal {H} = \lbrace h_1, h_2,\dots ,h_{N}\rbrace $ on top of which the first pass decoder applies the attention. The pre-softmax hidden states $\lbrace \hat{s}_1,\hat{s}_2,\dots ,\hat{s}_{M}\rbrace $ produced by the decoder lead to a first pass translation $\lbrace \hat{y}_1,\hat{y}_2,\dots , \hat{y}_{M}\rbrace $. The second pass decoder intervenes at this point and generates a second translation by attending separately to both $\mathcal {H}$ and the concatenated state vectors $\lbrace [\hat{s}_1;\hat{y}_1], [\hat{s}_2; \hat{y}_2],\dots ,[\hat{s}_{M}; \hat{y}_{M}]\rbrace $. Two context vectors are produced as a result; together with $s_{t-1}$ (the previous hidden state of the second-pass decoder) and $y_{t-1}$ (its previous output), they are fed to the second-pass decoder to yield $s_t$ and then $y_t$.
A transformer-based deliberation architecture is proposed by BIBREF1. It follows the same two-pass refinement process, with every second-pass decoder block attending to both the encoder output $\mathcal {H}$ and the first-pass pre-softmax hidden states $\mathcal {\hat{S}}$. However, it differs from BIBREF20 in that the actual first-pass translation $\hat{Y}$ is not used for the second-pass attention.
<<</Deliberation-based NMT>>>
<<<Multimodality>>>
<<<Visual Features>>>
We experiment with three types of video features, namely average-pooled vector representations (), convolutional layer outputs (), and Ten-Hot action category embeddings (). The features are provided by the How2 dataset using the following approach: a video is segmented into smaller parts of 16 frames each, and the segments are fed to a 3D ResNeXt-101 CNN BIBREF22, trained to recognise 400 action classes BIBREF23. The 2048-D fully-connected features are then averaged across the segments to obtain a single feature vector for the overall video.
In order to obtain the features, 16 equi-distant frames are sampled from a video, and they are then used as input to an inflated 3D ResNet-50 CNN BIBREF24 fine-tuned on the Moments in Time action video dataset. The CNN hence takes in a video and classifies it into one of 339 categories. The features, taken at the CONV$_4$ layer of the network, have a $7 \times 7 \times 2048$ dimensionality.
Higher-level semantic information can be more helpful than convolutional features. We apply the same CNN to a video as we do for features, but this time the focus is on the softmax layer output: we process the embedding matrix to keep the 10 most probable category embeddings intact while zeroing out the remaining ones. We call this representation ten-hot action category embeddings ().
<<</Visual Features>>>
<<<Integration Approaches>>>
Encoder with Additive Visual Conditioning (-) In this approach, inspired by BIBREF7, we add a projection of the visual features to each output of the vanilla transformer encoder (-). This projection is strictly linear from the 2048-D features to the 1024-D space in which the self attention hidden states reside, and the projection matrix is learned jointly with the translation model.
Decoder with Visual Attention (-) In order to accommodate attention to visual features at the decoder side and inspired by BIBREF25, we insert one layer of visual cross attention at a decoder block immediately before the fully-connected layer. We name the transformer decoder with such an extra layer as –, where this layer is immediately after the textual attention to the encoder output. Specifically, we experiment with attention to , and features separately. The visual attention is distributed across the 49 video regions in , the 339 action category word embeddings in , or the 32 rows in where we reshape the 2048-D vector into a $32 \times 64$ matrix.
<<</Integration Approaches>>>
<<<Multimodal Transformers>>>
The vanilla text-only transformer (-) is used as a baseline, and we design two variants: with additive visual conditioning (-) and with attention to visual features (-). A -features a -and a vanilla transformer decoder (-), therefore utilising visual information only at the encoder side. In contrast, a -is configured with a -and a –, exploiting visual cues only at the decoder. Figure FIGREF7 summarises the two approaches.
<<</Multimodal Transformers>>>
<<<Multimodal Deliberation>>>
Our multimodal deliberation models differ from each other in two ways: whether to use additive () BIBREF7 or cascade () textual deliberation to integrate the textual attention to the original input and to the first pass, and whether to employ visual attention (-) or additive visual conditioning (-) to integrate the visual features into the textual MT model. Figures FIGREF9 and FIGREF10 show the configurations of our additive and cascade deliberation models, respectively, each also showing the connections necessary for -and -.
Additive () & Cascade () Textual Deliberation
In an additive-deliberation second-pass decoder (–) block, the first layer is still self-attention, whereas the second layer is the addition of two separate attention sub-layers. The first sub-layer attends to the encoder output in the same way -does, while the attention of the second sub-layer is distributed across the concatenated first pass outputs and hidden states. The input to both sub-layers is the output of the self-attention layer, and the outputs of the sub-layers are summed as the final output and then (with a residual connection) fed to the visual attention layer if the decoder is multimodal or to the fully connected layer otherwise.
For the cascade version, the only difference is that, instead of two sub-layers, we have two separate, successive layers with the same functionalities.
It is worth mentioning that we introduce the attention to the first pass only at the initial three decoder blocks out of the total six of the second pass decoder (-), following BIBREF7.
Additive Visual Conditioning (-) & Visual Attention (-)
-and -are simply applying -and -respectively to a deliberation model, therefore more details have been introduced in Section SECREF5.
For -, similar to in -, we add a projection of the visual features to the output of -, and use -as the first pass decoder and either additive or cascade deliberation as the -.
For -, in a similar vein as -, the encoder in this setting is simply -and the first pass decoder is just -, but this time -is responsible for attending to the first pass output as well as the visual features. For both additive and cascade deliberation, a visual attention layer is inserted immediately before the fully-connected layer, so that the penultimate layer of a decoder block now attends to visual information.
<<</Multimodal Deliberation>>>
<<</Multimodality>>>
<<</Methods>>>
<<<Experiments>>>
<<<Dataset>>>
We stick to the default training/validation/test splits and the pre-extracted speech features for the How2 dataset, as provided by the organizers. As for the pre-processing, we lowercase the sentences and then tokenise them using Moses BIBREF26. We then apply subword segmentation BIBREF27 by learning separate English and Portuguese models with 20,000 merge operations each. The English corpus used when training the subword model consists of both the ground-truth video subtitles and the noisy transcripts produced by the underlying ASR system. We do not share vocabularies between the source and target domains. Finally for the post-processing step, we merge the subword tokens, apply recasing and detokenisation. The recasing model is a standard Moses baseline trained again on the parallel How2 corpus.
The baseline ASR system is trained on the How2 dataset as well. This system is then used to obtain noisy transcripts for the whole dataset, using beam-search with a beam size of 10. The pre-processing pipeline for the ASR differs from the MT pipeline in that punctuation is removed and the subword segmentation is performed using SentencePiece BIBREF28 with a vocabulary size of 5,000. The test-set performance of this ASR is around 19% WER.
<<</Dataset>>>
<<<Training>>>
We train our transformer and deliberation models until convergence largely with transformer_big hyperparameters: 16 attention heads, 1024-D hidden states and a dropout of 0.1. During inference, we apply beam-search with beam size of 10. For deliberation, we first train the underlying transformer model until convergence, and use its weights to initialise the encoder and the first pass decoder. After freezing those weights, we train -until convergence. The reason for the partial freezing is that our preliminary experiments showed that it enabled better performance compared to updating the whole model. Following BIBREF20, we obtain 10-best samples from the first pass with beam-search for source augmentation during the training of -.
We train all the models on an Nvidia RTX 2080Ti with a batch size of 1024, a base learning rate of 0.02 with 8,000 warm-up steps for the Adam BIBREF29 optimiser, and a patience of 10 epochs for early stopping based on approx-BLEU () for the transformers and 3 epochs for the deliberation models. After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set. The transformer and the deliberation models are based upon the library BIBREF31 (v1.3.0 RC1) as well as the vanilla transformer-based deliberation BIBREF20 and their multimodal variants BIBREF7.
<<</Training>>>
<<</Experiments>>>
<<<Results & Analysis>>>
<<<Quantitative Results>>>
We report tokenised results obtained using the multeval toolkit BIBREF32. We focus on single system performance and thus, do not perform any ensembling or checkpoint averaging.
The scores of the models are shown in Table TABREF17. Evident from the table is that the best models overall are -and –with a score of 39.8, and the other multimodal transformers have slightly worse performance, showing score drops around 0.1. Also, none of the multimodal transformer systems are significantly different from the baseline, which is a sign of the limited extent to which visual features affect the output.
For additive deliberation (-), the performance variation is considerably larger: -and take the lead with 37.6 , but the next best system (-) plunges to 37.2. The other two (-& -) also have noticeably worse results (36.0 and 37.0). Overall, however, -is still similar to the transformers in that the baseline generally yields higher-quality translations.
Cascade deliberation, on the other hand, is different in that its text-only baseline is outperformed by most of its multimodal counterparts. Multimodality enables boosts as large as around 1 point in the cases of -and -, both of which achieve about 37.4 and are significantly different from the baseline.
Another observation is that the deliberation models as a whole lead to worse performance than the canonical transformers, with deterioration ranging from 2.3 (across -variants) to 3.5 (across -systems), which defies the findings of BIBREF7. We leave this to future investigations.
<<</Quantitative Results>>>
<<<Incongruence Analysis>>>
To further probe the effect of multimodality, we follow the incongruent decoding approach BIBREF15, where our multimodal models are fed with mismatched visual features. The general assumption is that a model will have learned to exploit visual information to help with its translation, if it shows substantial performance degradation when given wrong visual features. The results are reported in Table TABREF19.
Overall, there are considerable parallels between the transformers and the cascade deliberation models in terms of the incongruence effect, such as universal performance deterioration (ranging from 0.1 to 0.6 ) and more noticeable score changes ($\downarrow $ 0.5 for –and $\downarrow $ 0.6 for —) in the -setting compared to the other scenarios. Additive deliberation, however, manifests a drastically different pattern, showing almost no incongruence effect for -, only a 0.2 decrease for -, and even a 0.1 boost for -and -.
Therefore, the determination can be made that and -models are considerably more sensitive to incorrect visual information than -, which means the former better utilise visual clues during translation.
Interestingly, the extent of performance degradation caused by incongruence is not necessarily correlated with the congruent scores. For example, –is on par with –in congruent decoding (differing by around 0.1 ), but the former suffers only a 0.1-loss with incongruence whereas the figure for the latter is 0.4, in addition to the fact that the latter becomes significantly different after incongruent decoding. This means that some multimodal models that are sensitive to incongruence likely complement visual attention with textual attention but without getting higher-quality translation as a result.
The differences between the multimodal behaviour of additive and cascade deliberation also warrant more investigation, since the two types of deliberation are identical in their utilisation of visual features and only vary in their handling of the textual attention to the outputs of the encoder and the first pass decoder.
<<</Incongruence Analysis>>>
<<</Results & Analysis>>>
<<<Conclusions>>>
We explored a series of transformers and deliberation based models to approach cascaded multimodal speech translation as our participation in the How2-based speech translation task of IWSLT 2019. We submitted the –system, which is a canonical transformer with visual attention over the convolutional features, as our primary system with the remaining ones marked as contrastive ones. The primary system obtained a of 39.63 on the public IWSLT19 test set, whereas -, the top contrastive system on the same set, achieved 39.85. Our main conclusions are as follows: (i) the visual modality causes varying levels of translation quality damage to the transformers and additive deliberation, but boosts cascade deliberation; (ii) the multimodal transformers and cascade deliberation show performance degradation due to incongruence, but additive deliberation is not as affected; (iii) there is no strict correlation between incongruence sensitivity and translation performance.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Introduction, Results & Analysis"
],
"type": "disordered_section"
}
|
1910.13215
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Transformer-based Cascaded Multimodal Speech Translation
<<<Abstract>>>
This paper describes the cascaded multimodal speech translation systems developed by Imperial College London for the IWSLT 2019 evaluation campaign. The architecture consists of an automatic speech recognition (ASR) system followed by a Transformer-based multimodal machine translation (MMT) system. While the ASR component is identical across the experiments, the MMT model varies in terms of the way of integrating the visual context (simple conditioning vs. attention), the type of visual features exploited (pooled, convolutional, action categories) and the underlying architecture. For the latter, we explore both the canonical transformer and its deliberation version with additive and cascade variants which differ in how they integrate the textual attention. Upon conducting extensive experiments, we found that (i) the explored visual integration schemes often harm the translation performance for the transformer and additive deliberation, but considerably improve the cascade deliberation; (ii) the transformer and cascade deliberation integrate the visual modality better than the additive deliberation, as shown by the incongruence analysis.
<<</Abstract>>>
<<<Introduction>>>
The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition system (ASR) and further translated into the target language using an MT component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpus as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system.
MMT is a relatively new research topic which is interested in leveraging auxiliary modalities such as audio or vision in order to improve translation performance BIBREF6. MMT has proved effective in scenarios such as for disambiguation BIBREF7 or when the source sentences are corrupted BIBREF8. So far, MMT has mostly focused on integrating visual features into neural MT (NMT) systems using visual attention through convolutional feature maps BIBREF9, BIBREF10 or visual conditioning of encoder/decoder blocks through fully-connected features BIBREF11, BIBREF12, BIBREF13, BIBREF14.
Inspired by previous research in MMT, we explore several multimodal integration schemes using action-level video features. Specifically, we experiment with visually conditioning the encoder output and adding visual attention to the decoder. We further extend the proposed schemes to the deliberation variant BIBREF1 of the canonical transformer in two ways: additive and cascade multimodal deliberation, which are distinct in their textual attention regimes. Overall, the results show that multimodality in general leads to performance degradation for the canonical transformer and the additive deliberation variant, but can result in substantial improvements for the cascade deliberation. Our incongruence analysis BIBREF15 reveals that the transformer and cascade deliberation are more sensitive to and therefore more reliant on visual features for translation, whereas the additive deliberation is much less impacted. We also observe that incongruence sensitivity and translation performance are not necessarily correlated.
<<</Introduction>>>
<<<Methods>>>
In this section, we briefly describe the proposed multimodal speech translation system and its components.
<<<Automatic Speech Recognition>>>
The baseline ASR system that we use to obtain English transcripts is an attentive sequence-to-sequence architecture with a stacked encoder of 6 bidirectional LSTM layers BIBREF16. Each LSTM layer is followed by a tanh projection layer. The middle two LSTM layers apply a temporal subsampling BIBREF17 by skipping every other input, reducing the length of the sequence $\mathrm {X}$ from $T$ to $T/4$. All LSTM and projection layers have 320 hidden units. The forward-pass of the encoder produces the source encodings on top of which attention will be applied within the decoder. The hidden and cell states of all LSTM layers are initialized with 0. The decoder is a 2-layer stacked GRU BIBREF18, where the first GRU receives the previous hidden state of the second GRU in a transitional way. GRU layers, attention layer and embeddings have 320 hidden units. We share the input and output embeddings to reduce the number of parameters BIBREF19. At timestep $t\mathrm {=}0$, the hidden state of the first GRU is initialized with the average-pooled source encoding states.
<<</Automatic Speech Recognition>>>
<<<Deliberation-based NMT>>>
A human translator typically produces a translation draft first, and then refines it towards the final translation. The idea behind the deliberation networks BIBREF20 simulates this process by extending the conventional attentive encoder-decoder architecture BIBREF21 with a second pass refinement decoder. Specifically, the encoder first encodes a source sentence of length $N$ into a sequence of hidden states $\mathcal {H} = \lbrace h_1, h_2,\dots ,h_{N}\rbrace $ on top of which the first pass decoder applies the attention. The pre-softmax hidden states $\lbrace \hat{s}_1,\hat{s}_2,\dots ,\hat{s}_{M}\rbrace $ produced by the decoder lead to a first pass translation $\lbrace \hat{y}_1,\hat{y}_2,\dots , \hat{y}_{M}\rbrace $. The second pass decoder intervenes at this point and generates a second translation by attending separately to both $\mathcal {H}$ and the concatenated state vectors $\lbrace [\hat{s}_1;\hat{y}_1], [\hat{s}_2; \hat{y}_2],\dots ,[\hat{s}_{M}; \hat{y}_{M}]\rbrace $. Two context vectors are produced as a result; together with $s_{t-1}$ (the previous hidden state of the second-pass decoder) and $y_{t-1}$ (its previous output), they are fed to the second-pass decoder to yield $s_t$ and then $y_t$.
A transformer-based deliberation architecture is proposed by BIBREF1. It follows the same two-pass refinement process, with every second-pass decoder block attending to both the encoder output $\mathcal {H}$ and the first-pass pre-softmax hidden states $\mathcal {\hat{S}}$. However, it differs from BIBREF20 in that the actual first-pass translation $\hat{Y}$ is not used for the second-pass attention.
<<</Deliberation-based NMT>>>
<<<Multimodality>>>
<<<Visual Features>>>
We experiment with three types of video features, namely average-pooled vector representations (), convolutional layer outputs (), and Ten-Hot action category embeddings (). The features are provided by the How2 dataset using the following approach: a video is segmented into smaller parts of 16 frames each, and the segments are fed to a 3D ResNeXt-101 CNN BIBREF22, trained to recognise 400 action classes BIBREF23. The 2048-D fully-connected features are then averaged across the segments to obtain a single feature vector for the overall video.
In order to obtain the features, 16 equi-distant frames are sampled from a video, and they are then used as input to an inflated 3D ResNet-50 CNN BIBREF24 fine-tuned on the Moments in Time action video dataset. The CNN hence takes in a video and classifies it into one of 339 categories. The features, taken at the CONV$_4$ layer of the network, have a $7 \times 7 \times 2048$ dimensionality.
Higher-level semantic information can be more helpful than convolutional features. We apply the same CNN to a video as we do for features, but this time the focus is on the softmax layer output: we process the embedding matrix to keep the 10 most probable category embeddings intact while zeroing out the remaining ones. We call this representation ten-hot action category embeddings ().
<<</Visual Features>>>
<<<Integration Approaches>>>
Encoder with Additive Visual Conditioning (-) In this approach, inspired by BIBREF7, we add a projection of the visual features to each output of the vanilla transformer encoder (-). This projection is strictly linear from the 2048-D features to the 1024-D space in which the self attention hidden states reside, and the projection matrix is learned jointly with the translation model.
Decoder with Visual Attention (-) In order to accommodate attention to visual features at the decoder side and inspired by BIBREF25, we insert one layer of visual cross attention at a decoder block immediately before the fully-connected layer. We name the transformer decoder with such an extra layer as –, where this layer is immediately after the textual attention to the encoder output. Specifically, we experiment with attention to , and features separately. The visual attention is distributed across the 49 video regions in , the 339 action category word embeddings in , or the 32 rows in where we reshape the 2048-D vector into a $32 \times 64$ matrix.
<<</Integration Approaches>>>
<<<Multimodal Transformers>>>
The vanilla text-only transformer (-) is used as a baseline, and we design two variants: with additive visual conditioning (-) and with attention to visual features (-). A -features a -and a vanilla transformer decoder (-), therefore utilising visual information only at the encoder side. In contrast, a -is configured with a -and a –, exploiting visual cues only at the decoder. Figure FIGREF7 summarises the two approaches.
<<</Multimodal Transformers>>>
<<<Multimodal Deliberation>>>
Our multimodal deliberation models differ from each other in two ways: whether to use additive () BIBREF7 or cascade () textual deliberation to integrate the textual attention to the original input and to the first pass, and whether to employ visual attention (-) or additive visual conditioning (-) to integrate the visual features into the textual MT model. Figures FIGREF9 and FIGREF10 show the configurations of our additive and cascade deliberation models, respectively, each also showing the connections necessary for -and -.
Additive () & Cascade () Textual Deliberation
In an additive-deliberation second-pass decoder (–) block, the first layer is still self-attention, whereas the second layer is the addition of two separate attention sub-layers. The first sub-layer attends to the encoder output in the same way -does, while the attention of the second sub-layer is distributed across the concatenated first pass outputs and hidden states. The input to both sub-layers is the output of the self-attention layer, and the outputs of the sub-layers are summed as the final output and then (with a residual connection) fed to the visual attention layer if the decoder is multimodal or to the fully connected layer otherwise.
For the cascade version, the only difference is that, instead of two sub-layers, we have two separate, successive layers with the same functionalities.
It is worth mentioning that we introduce the attention to the first pass only at the initial three decoder blocks out of the total six of the second pass decoder (-), following BIBREF7.
Additive Visual Conditioning (-) & Visual Attention (-)
-and -are simply applying -and -respectively to a deliberation model, therefore more details have been introduced in Section SECREF5.
For -, similar to in -, we add a projection of the visual features to the output of -, and use -as the first pass decoder and either additive or cascade deliberation as the -.
For -, in a similar vein as -, the encoder in this setting is simply -and the first pass decoder is just -, but this time -is responsible for attending to the first pass output as well as the visual features. For both additive and cascade deliberation, a visual attention layer is inserted immediately before the fully-connected layer, so that the penultimate layer of a decoder block now attends to visual information.
<<</Multimodal Deliberation>>>
<<</Multimodality>>>
<<</Methods>>>
<<<Experiments>>>
<<<Dataset>>>
We stick to the default training/validation/test splits and the pre-extracted speech features for the How2 dataset, as provided by the organizers. As for the pre-processing, we lowercase the sentences and then tokenise them using Moses BIBREF26. We then apply subword segmentation BIBREF27 by learning separate English and Portuguese models with 20,000 merge operations each. The English corpus used when training the subword model consists of both the ground-truth video subtitles and the noisy transcripts produced by the underlying ASR system. We do not share vocabularies between the source and target domains. Finally for the post-processing step, we merge the subword tokens, apply recasing and detokenisation. The recasing model is a standard Moses baseline trained again on the parallel How2 corpus.
The baseline ASR system is trained on the How2 dataset as well. This system is then used to obtain noisy transcripts for the whole dataset, using beam-search with a beam size of 10. The pre-processing pipeline for the ASR differs from the MT pipeline in that punctuation is removed and the subword segmentation is performed using SentencePiece BIBREF28 with a vocabulary size of 5,000. The test-set performance of this ASR is around 19% WER.
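The ASR-side subword segmentation can be sketched with the SentencePiece Python API; the input file name and model prefix below are placeholders.

```python
# Sketch of the ASR-side subword model: SentencePiece with a 5,000-piece vocabulary
# trained on punctuation-stripped transcripts.
import sentencepiece as spm

spm.SentencePieceTrainer.Train(
    "--input=asr_transcripts.txt --model_prefix=asr_sp --vocab_size=5000"
)
sp = spm.SentencePieceProcessor(model_file="asr_sp.model")
print(sp.encode("the test set performance is around nineteen percent", out_type=str))
```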
<<</Dataset>>>
<<<Training>>>
We train our transformer and deliberation models until convergence largely with transformer_big hyperparameters: 16 attention heads, 1024-D hidden states and a dropout of 0.1. During inference, we apply beam-search with beam size of 10. For deliberation, we first train the underlying transformer model until convergence, and use its weights to initialise the encoder and the first pass decoder. After freezing those weights, we train -until convergence. The reason for the partial freezing is that our preliminary experiments showed that it enabled better performance compared to updating the whole model. Following BIBREF20, we obtain 10-best samples from the first pass with beam-search for source augmentation during the training of -.
We train all the models on an Nvidia RTX 2080Ti with a batch size of 1024, a base learning rate of 0.02 with 8,000 warm-up steps for the Adam BIBREF29 optimiser, and a patience of 10 epochs for early stopping based on approx-BLEU () for the transformers and 3 epochs for the deliberation models. After the training finishes, we evaluate all the checkpoints on the validation set and compute the real BIBREF30 scores, based on which we select the best model for inference on the test set. The transformer and the deliberation models are based upon the library BIBREF31 (v1.3.0 RC1) as well as the vanilla transformer-based deliberation BIBREF20 and their multimodal variants BIBREF7.
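The quoted base learning rate and warm-up steps correspond to the inverse-square-root warm-up schedule commonly paired with Adam for transformers; the sketch below is illustrative, and the exact scaling applied by the underlying library may differ.

```python
# Inverse-square-root warm-up schedule (illustrative).
def learning_rate(step, base_lr=0.02, warmup_steps=8000):
    step = max(step, 1)
    return base_lr * min(step ** -0.5, step * warmup_steps ** -1.5)

for s in (1, 4000, 8000, 16000, 100000):
    print(s, round(learning_rate(s), 6))
```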
<<</Training>>>
<<</Experiments>>>
<<<Results & Analysis>>>
<<<Quantitative Results>>>
We report tokenised results obtained using the multeval toolkit BIBREF32. We focus on single system performance and thus, do not perform any ensembling or checkpoint averaging.
The scores of the models are shown in Table TABREF17. Evident from the table is that the best models overall are -and –with a score of 39.8, and the other multimodal transformers have slightly worse performance, showing score drops around 0.1. Also, none of the multimodal transformer systems are significantly different from the baseline, which is a sign of the limited extent to which visual features affect the output.
For additive deliberation (-), the performance variation is considerably larger: -and take the lead with 37.6 , but the next best system (-) plunges to 37.2. The other two (-& -) also have noticeably worse results (36.0 and 37.0). Overall, however, -is still similar to the transformers in that the baseline generally yields higher-quality translations.
Cascade deliberation, on the other hand, is different in that its text-only baseline is outperformed by most of its multimodal counterparts. Multimodality enables boosts as large as around 1 point in the cases of -and -, both of which achieve about 37.4 and are significantly different from the baseline.
Another observation is that the deliberation models as a whole lead to worse performance than the canonical transformers, with deterioration ranging from 2.3 (across -variants) to 3.5 (across -systems), which contradicts the findings of BIBREF7. We leave this to future investigation.
<<</Quantitative Results>>>
<<<Incongruence Analysis>>>
To further probe the effect of multimodality, we follow the incongruent decoding approach BIBREF15, where our multimodal models are fed with mismatched visual features. The general assumption is that a model will have learned to exploit visual information to help with its translation, if it shows substantial performance degradation when given wrong visual features. The results are reported in Table TABREF19.
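Incongruent decoding amounts to pairing each test sentence with the visual features of a different video before translation, so that a score drop indicates genuine use of the visual modality; a minimal sketch with a placeholder translate() call follows.

```python
# Sketch of incongruent decoding; translate() is a placeholder for the actual decoder.
import numpy as np

def make_incongruent(visual_feats):
    """Shift the features by one position so no sentence keeps its own video."""
    return np.roll(visual_feats, shift=1, axis=0)

# congruent   = translate(sentences, visual_feats)
# incongruent = translate(sentences, make_incongruent(visual_feats))
# compare the two sets of scores
```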
Overall, there are considerable parallels between the transformers and the cascade deliberation models in terms of the incongruence effect, such as universal performance deterioration (ranging from 0.1 to 0.6 ) and more noticeable score changes ($\downarrow $ 0.5 for –and $\downarrow $ 0.6 for —) in the -setting compared to the other scenarios. Additive deliberation, however, manifests a drastically different pattern, showing almost no incongruence effect for -, only a 0.2 decrease for -, and even a 0.1 boost for -and -.
We can therefore conclude that the and -models are considerably more sensitive to incorrect visual information than -, which means the former better utilise visual cues during translation.
Interestingly, the extent of performance degradation caused by incongruence is not necessarily correlated with the congruent scores. For example, –is on par with –in congruent decoding (differing by around 0.1 ), but the former suffers only a 0.1-loss with incongruence whereas the figure for the latter is 0.4, in addition to the fact that the latter becomes significantly different after incongruent decoding. This means that some multimodal models that are sensitive to incongruence likely complement visual attention with textual attention but without getting higher-quality translation as a result.
The differences between the multimodal behaviour of additive and cascade deliberation also warrant more investigation, since the two types of deliberation are identical in their utilisation of visual features and only vary in their handling of the textual attention to the outputs of the encoder and the first pass decoder.
<<</Incongruence Analysis>>>
<<</Results & Analysis>>>
<<<Conclusions>>>
We explored a series of transformer- and deliberation-based models to approach cascaded multimodal speech translation as our participation in the How2-based speech translation task of IWSLT 2019. We submitted the –system, which is a canonical transformer with visual attention over the convolutional features, as our primary system, with the remaining ones marked as contrastive. The primary system obtained a score of 39.63 on the public IWSLT19 test set, whereas -, the top contrastive system on the same set, achieved 39.85. Our main conclusions are as follows: (i) the visual modality causes varying levels of translation quality damage to the transformers and additive deliberation, but boosts cascade deliberation; (ii) the multimodal transformers and cascade deliberation show performance degradation due to incongruence, but additive deliberation is not as affected; (iii) there is no strict correlation between incongruence sensitivity and translation performance.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Experiments, Introduction"
],
"type": "disordered_section"
}
|
1912.00159
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Automatic Creation of Text Corpora for Low-Resource Languages from the Internet: The Case of Swiss German
<<<Abstract>>>
This paper presents SwissCrawl, the largest Swiss German text corpus to date. Composed of more than half a million sentences, it was generated using a customized web scraping tool that could be applied to other low-resource languages as well. The approach demonstrates how freely available web pages can be used to construct comprehensive text corpora, which are of fundamental importance for natural language processing. In an experimental evaluation, we show that using the new corpus leads to significant improvements for the task of language modeling. To capture new content, our approach will run continuously to keep increasing the corpus over time.
<<</Abstract>>>
<<<Introduction>>>
Swiss German (“Schwyzerdütsch” or “Schwiizertüütsch”, abbreviated “GSW”) is the name of a large continuum of dialects attached to the Germanic language tree spoken by more than 60% of the Swiss population BIBREF0. Used every day from colloquial conversations to business meetings, Swiss German in its written form has become more and more popular in recent years with the rise of blogs, messaging applications and social media. However, the variability of the written form is rather large as orthography is more based on local pronunciations and emerging conventions than on a unique grammar.
Even though Swiss German is widely spread in Switzerland, there are still few natural language processing (NLP) corpora, studies or tools available BIBREF1. This lack of resources may be explained by the small pool of speakers (less than one percent of the world population), but also the many intrinsic difficulties of Swiss German, including the lack of official writing rules, the high variability across different dialects, and the informal context in which texts are commonly written. Furthermore, there is no official top-level domain (TLD) for Swiss German on the Internet, which renders the automatic collection of Swiss German texts more difficult.
To automate the treatment of Swiss German and foster its adoption in online services such as automatic speech recognition (ASR), we gathered the largest corpus of written Swiss German to date by crawling the web using a customized tool. We highlight the difficulties for finding Swiss German on the web and demonstrate in an experimental evaluation how our text corpus can be used to significantly improve an important NLP task that is a fundamental part of the ASR process: language modeling.
<<</Introduction>>>
<<<Related Work>>>
Few GSW corpora already exist. Although they are very valuable for research on specific aspects of the Swiss German language, they are either highly specialized BIBREF2 BIBREF3 BIBREF4, rather small BIBREF1 (7,305 sentences), or do not offer full sentences BIBREF5.
To our knowledge, the only comprehensive written Swiss German corpus to date comes from the Leipzig corpora collection initiative BIBREF6 offering corpora for more than 136 languages. The Swiss German data has two sources: the Alemannic Wikipedia and web crawls on the .ch domain in 2016 and 2017, leading to a total of 175,399 unique sentences. While the Leipzig Web corpus for Swiss German is of considerable size, we believe this number does not reflect the actual amount of GSW available on the Internet. Furthermore, the enforced sentence structures do not represent the way Swiss German speakers write online.
In this paper, we thus aim at augmenting the Leipzig Web corpus by looking further than the .ch domain and by using a suite of tools specifically designed for retrieving Swiss German.
The idea of using the web as a vast source of linguistic data has been around for decades BIBREF7 and many authors have already addressed its importance for low-resource languages BIBREF8. A common technique is to send queries made of mid-frequency $n$-grams to a search engine to gather bootstrap URLs, which initiate a crawl using a breadth-first strategy in order to gather meaningful information, such as documents or words BIBREF9, BIBREF5.
Existing tools and studies, however, have requirements that are inadequate for the case of Swiss German. For example, GSW is not a language known to search engines BIBREF9, does not have specific TLDs BIBREF10, and lacks good language identification models. Also, GSW documents are too rare to use bootstrapping techniques BIBREF8. Finally, as GSW is scarce and mostly found in comments sections or as part of multilingual web pages (e.g. High German), we cannot afford to “privilege precision over recall” BIBREF11 by focusing on the main content of a page.
As a consequence, our method is based on known techniques that are adapted to deal with those peculiarities. Furthermore, it was designed for having a human in the loop. Its iterative nature makes it possible to refine each step of the tool chain as our knowledge of GSW improves.
<<</Related Work>>>
<<<Proposed System>>>
The two main components of our proposed system are shown in Figure FIGREF1: a seeder that gathers potentially interesting URLs using a Search Engine and a crawler that extracts GSW from web pages, linked together by a MongoDB database. The system is implemented in Python 3, with the full code available on GitHub. Due to the exploratory nature of the task, the tool chain is executed in an iterative manner, allowing us to control and potentially improve the process at specific points.
<<<Language Identification>>>
Language identification (LID) is a central component of the pipeline, as it has a strong influence on the final result. In addition, readily available tools are not performing at a satisfying level. For these reasons we created a tailor-made LID system for this situation.
LID has been extensively studied over the past decades BIBREF12 and has achieved impressive results on long monolingual documents in major languages such as English. However, the task becomes more challenging when the pool of training data is small and of high variability, and when the unit of identification is only a sentence.
Free pretrained LIDs supporting GSW such as FastText BIBREF13 are trained on the Alemannic Wikipedia, which encompasses not only GSW, but also German dialects such as Badisch, Elsässisch, Schwäbisch and Vorarlbergisch. This makes the precision of the model insufficient for our purposes.
The dataset used to build our Swiss German LID is based on the Leipzig text corpora BIBREF6, mostly focusing on the texts gathered from the Internet. In preliminary experiments, we have chosen eight language classes shown in Table TABREF4, which give precedence to languages closely related to Swiss German in their structure. In this Table, GSW_LIKE refers to a combination of dialects that are similar to Swiss German but for which we did not have sufficient resources to model classes on their own.
A total of 535,000 sentences are considered for LID with an equal distribution over the eight classes. The 66,684 GSW sentences originate from the Leipzig web corpus 2017 and have been refined during preliminary experiments to exclude obvious non-GSW contents. We use 75% of the data for training, 10% for optimizing system parameters, and 15% for testing the final performance.
Using a pretrained German BERT model BIBREF14 and fine-tuning it on our corpus, we obtain a high LID accuracy of 99.58%. GSW is most confused with German (0.04%) and GSW_LIKE (0.04%). We have also validated the LID system on SMS sentences BIBREF2, where it proves robust for sentences as short as five words.
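A rough sketch of such a fine-tuning setup with the HuggingFace transformers library is shown below; the exact German BERT checkpoint, the example sentence and the label mapping are assumptions, not the authors' configuration.

```python
# Sketch of the LID classifier: a pretrained German BERT fine-tuned for eight classes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-german-cased", num_labels=8
)

batch = tokenizer(["Das isch e churze Satz uf Schwiizertüütsch."],
                  return_tensors="pt", truncation=True, padding=True)
labels = torch.tensor([0])                      # e.g. 0 = GSW
loss = model(**batch, labels=labels).loss
loss.backward()   # one fine-tuning step; in practice wrap this in a training loop
```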
<<</Language Identification>>>
<<<The Seeder>>>
Query generation has already been extensively studied BIBREF15, BIBREF9. In the case of Swiss German, we tested three different approaches: (a) most frequent trigrams, (b) selection of 2 to 7 random words weighted by their frequency distribution and (c) human-generated queries.
When comparing the corpora generated by 100 seeds of each type, we did not observe significant differences in terms of quantity or quality for the three seeding strategies. On the positive side, $50\%$ of the sentences were different from one seed strategy to the other, suggesting an approach where strategies are mixed. However, we also observed that (a) tends to yield more similar queries over time and (c) is too time-consuming for practical use.
Considering these observations, we privileged the following approach (a minimal code sketch is given at the end of this subsection):
Start with a list of sentences, either from a bootstrap dataset or from sentences from previous crawls using one single sentence per unique URL;
Compute the frequency over the vocabulary, normalizing words to lower case and discarding those having non-alphabetic characters;
Filter out words appearing only once or present in German or English vocabularies;
Generate query seeds by sampling 3 words with a probability following their frequency distribution;
Exclude seeds with more than two single-letter words or having a GSW probability below 95% (see Section SECREF3).
Initial sentences come from the Leipzig web corpus 2017, filtered by means of the LID described in Section SECREF3.
Each seed is submitted to startpage.com, a Google Search proxy augmented with privacy features. To ensure GSW is not auto-corrected to High German, each word is first surrounded by double quotes. The first 20 new URLs, i.e. URLs that were never seen before, are saved for further crawling.
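The seed-generation steps listed above can be sketched as follows: count word frequencies over GSW sentences, drop rare words and words found in German or English word lists, then sample three-word queries according to the frequency distribution. The word lists and names are placeholders, and the GSW-probability and single-letter checks are omitted for brevity.

```python
# Sketch of seed generation (placeholders; not the released implementation).
import random
from collections import Counter

def build_vocab(sentences, de_vocab, en_vocab):
    counts = Counter(w for s in sentences for w in s.lower().split() if w.isalpha())
    return {w: c for w, c in counts.items()
            if c > 1 and w not in de_vocab and w not in en_vocab}

def generate_seed(vocab, k=3):
    words, weights = zip(*vocab.items())
    query = random.choices(words, weights=weights, k=k)
    # each word is double-quoted so the search engine does not "correct" it
    return " ".join(f'"{w}"' for w in query)
```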
<<</The Seeder>>>
<<<The Crawler>>>
The crawler starts with a list of URLs and metadata, taken either from a file or from the MongoDB instance, which are added to a task queue with a depth of 0. As illustrated in Figure FIGREF1, each task consists of a series of steps that will download the page content, extract well-formed GSW sentences and add links found on the page to the task queue. At different stages of this pipeline, a decider can intervene in order to stop the processing early. A crawl may also be limited to a given depth, usually set to 3.
<<<Scrape>>>
The raw HTML content is fetched and converted to UTF-8 using a mixture of requests and BeautifulSoup. Boilerplate removal such as navigation and tables uses jusText BIBREF16, but ignores stop words filtering as such a list is not available for GSW. The output is a UTF-8 text containing newlines.
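A possible Python sketch of this step is shown below; the jusText parameter values used to disable stop-word filtering are assumptions.

```python
# Sketch of the scraping step: fetch a page and strip boilerplate with jusText.
# Stop-word filtering is disabled by passing an empty stop list (assumed settings).
import requests
import justext

def scrape(url):
    html = requests.get(url, timeout=30).content
    paragraphs = justext.justext(html, frozenset(),
                                 stopwords_low=0, stopwords_high=0)
    return "\n".join(p.text for p in paragraphs if not p.is_boilerplate)
```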
<<</Scrape>>>
<<<Normalize>>>
This stage tries to fix remaining encoding issues using ftfy BIBREF17 and to remove unicode emojis. Another important task is to normalize the unicode code points used for accents, spaces, dashes, quotes etc., and strip any invisible characters. To further improve the usability of the corpus and to simplify tokenization, we also try to enforce one single convention for spaces around quotes and colons, e.g. colons after closing quote, no space inside quotes.
<<</Normalize>>>
<<<Split>>>
To split text into sentences, we implemented Moses' split-sentences.perl in Python and changed it in three main ways: existing newlines are preserved, colons and semi-colons are considered segmentation hints, and sentences are not required to start with an uppercase letter. The latter is especially important as GSW is mostly found in comments where people tend to write fast and without proper casing/punctuation. The list of non-breaking prefixes used is a concatenation of the English and German prefixes found in Moses, with a few additions.
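A much-simplified illustration of these splitting rules (not the authors' actual port of split-sentences.perl, and with a toy prefix list) could look as follows.

```python
# Toy splitter: newlines are kept, colons/semi-colons act as hints, and no
# uppercase start is required.
import re

NON_BREAKING = {"dr.", "prof.", "z.b.", "bzw.", "nr."}   # placeholder prefix list

def split_sentences(text):
    out = []
    for line in text.split("\n"):                        # existing newlines are kept
        buf = ""
        for piece in re.split(r"(?<=[.!?;:])\s+", line.strip()):
            buf = (buf + " " + piece).strip()
            last_word = buf.split()[-1].lower() if buf else ""
            if last_word not in NON_BREAKING:            # do not break after known prefixes
                out.append(buf)
                buf = ""
        if buf:
            out.append(buf)
    return [s for s in out if s]
```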
<<</Split>>>
<<<Filter>>>
Non-sentences or malformed sentences are identified based on a list of $20+$ rules that normal sentences should obey. Most rules are specified in the form of regular expression patterns with boundaries of acceptable occurrences; a few compare the ratio of occurrences between two patterns. Examples of such rules in natural language are: “no more than one hashtag”, “no word with more than 30 characters”, “the ratio of capitalized to lowercase words is below 1.5”.
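Such a rule set can be sketched as a list of patterns with allowed occurrence ranges; the patterns and thresholds below only illustrate the rules quoted above, not the full production list.

```python
# Sketch of the rule-based sentence filter (illustrative rules only).
import re

RULES = [
    (r"#\w+", 0, 1),      # no more than one hashtag
    (r"\w{31,}", 0, 0),   # no word with more than 30 characters
]

def is_valid(sentence):
    for pattern, lo, hi in RULES:
        if not lo <= len(re.findall(pattern, sentence)) <= hi:
            return False
    words = sentence.split()
    upper = sum(w[0].isupper() for w in words)
    lower = max(1, len(words) - upper)
    return upper / lower <= 1.5   # capitalized/lowercase ratio below 1.5
```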
<<</Filter>>>
<<<Language ID>>>
Using the LID described in Section SECREF3, sentences with a GSW probability of less than 92% are discarded. This threshold is low on purpose in order to favor recall over precision.
<<</Language ID>>>
<<<Link filter>>>
This component is used to exclude or transform outgoing links found in a page based on duplicates, URL composition, but also specific rules for big social media sites or known blogs. Examples are the exclusion of unrelated national TLDs (.af, .nl, ...) and known media extensions (.pdf, .jpeg, etc.), the stripping of session IDs in URL parameters, and the homogenization of subdomains for sites such as Twitter. Note that filtering is based only on the URL and therefore does not handle redirects or URLs pointing to the same page. This leads to extra work during the crawling, but keeps the whole system simple.
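A sketch of such a URL filter using only the standard library is given below; the blacklists and session-parameter names are illustrative, not the real configuration.

```python
# Sketch of the link filter: drop unwanted TLDs and media extensions, strip session ids.
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

BAD_TLDS = (".af", ".nl")
BAD_EXTENSIONS = (".pdf", ".jpeg", ".jpg", ".png", ".zip")
SESSION_PARAMS = {"sid", "phpsessid", "jsessionid", "sessionid"}

def clean_url(url):
    parts = urlparse(url)
    if parts.netloc.lower().endswith(BAD_TLDS):
        return None                                   # unrelated national TLD
    if parts.path.lower().endswith(BAD_EXTENSIONS):
        return None                                   # known media extension
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k.lower() not in SESSION_PARAMS]      # strip session ids
    return urlunparse(parts._replace(query=urlencode(query)))
```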
<<</Link filter>>>
<<<Decide>>>
A decider has three main decisions to take. First, based on the metadata associated with a URL, should it be visited? In practice, we visit only new URLs, but the tool is designed such that a recrawl is possible if the page is detected as highly dynamic. The second decision arises at the end of the processing, where the page can be either saved or blacklisted. To favor recall, we currently keep any URL with at least one GSW sentence. Finally, the decider can choose to visit the outgoing links or not. After some trials, we found that following links from pages with more than two new GSW sentences is a reasonable choice, as pages with fewer sentences are often quotes or false positives.
<<</Decide>>>
<<<Duplicates>>>
During the crawl, the uniqueness of sentences and URLs considers only exact matches. However, when exporting the results, near-duplicate sentences are removed by first stripping any non-letter (including spaces) and making a lowercase comparison. We tried other near-duplicate approaches, but found that they also discarded meaningful writing variations.
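The export-time near-duplicate removal can be sketched as follows; the character class used to strip non-letters is an assumption.

```python
# Sketch of near-duplicate removal: sentences are duplicates if they match after
# removing all non-letters and lowercasing.
import re

def dedup(sentences):
    seen, unique = set(), []
    for s in sentences:
        key = re.sub(r"[^a-zA-ZäöüÄÖÜ]", "", s).lower()
        if key not in seen:
            seen.add(key)
            unique.append(s)
    return unique
```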
<<</Duplicates>>>
<<</The Crawler>>>
<<</Proposed System>>>
<<<State of the Swiss German Web>>>
Table TABREF14 shows the results of running the system three times using 100 seeds on a virtual machine with 5 CPU cores and no GPUs. As expected, the first iteration yields the most new sentences. Unfortunately, the number of newly discovered hosts and sentences decreases exponentially as the system runs, dropping to 20K sentences on the third iteration. This result emphasizes the fact that the amount of GSW on the web is very limited.
The third iteration also took significantly longer, which highlights the difficulties of crawling the web. In this iteration, some URLs had as many as 12 thousand outgoing links that we had to visit before discarding them. Another problem arises on websites where query parameters are used in URLs to encode cookie information and on which duplicate hypotheses cannot be resolved without visiting the links.
On each new search engine query, we go further down the list of results as the top ones may already be known. As such, the percentage of pertinent URLs retrieved (% good, see decider description in Section SECREF13) slowly decreases at each iteration. It is however still above 55% of the retrieved URLs on the third run, indicating a good quality of the seeds.
<<</State of the Swiss German Web>>>
<<<The SwissCrawl Text Corpus>>>
Using the proposed system, we were able to gather more than half a million unique GSW sentences from around the web. The crawling took place between September and November 2019. The corpus is available for download in the form of a CSV file with four columns: text, url, crawl_proba, date, with crawl_proba being the GSW probability returned by the LID system (see Section SECREF3).
<<<Contents>>>
The corpus is composed of 562,524 sentences from 62K URLs among 3,472 domains. The top ten domains (see Table TABREF18) are forums and social media sites. They account for 46% of the whole corpus.
In general, we consider sentences with a GSW probability of $\ge {99}\%$ to be Swiss German with high confidence. This represents more than 89% of the corpus (500K) (see Figure FIGREF19). The sentence length varies between 25 and 998 characters with a mean of $92\pm 55$ and a median of 77 (see Figure FIGREF20), while the number of words lies between 4 and 222, with a mean of $16\pm 10$ and a median of 14. This highlights a common pattern in Swiss German writings: used mostly in informal contexts, sentences tend to be short and to include many symbols, such as emojis or repetitive punctuation.
Very long sentences are usually lyrics that lack proper punctuation and thus could not be segmented properly. We however decided to keep them in the final corpus, as they could be useful in specific tasks and are easy to filter out otherwise.
Besides the normalization described in SECREF13, no cleaning nor post-processing is applied to the sentences. This is a deliberate choice to avoid losing any information that could be pertinent for a given task or for further selection. As a result, the mean letter density is 80% and only 61% of sentences both start with an uppercase letter and end with a common punctuation mark (.!?).
Finally, although we performed no human validation per se, we actively monitored the crawling process to spot problematic domains early. This allowed us to blacklist some domains entirely, for example those serving embedded PDFs (impossible to parse properly) or those written in closely related German dialects.
<<</Contents>>>
<<<Discussion>>>
Table TABREF23 shows some hand-picked examples. As most of our sources are social media sites and forums, the writing style is often colloquial, interspersed with emojis and slang. This perfectly reflects the use of GSW in real life, where speakers switch to High German in formal conversations.
In general, the quality of the sentences is good, with few false positives, mostly in High German or other German dialects and more rarely in Dutch or Luxembourgish. The presence of specific structures in the sentences is often the cause of such mistakes, as they yield strong GSW cues. For example:
High German with spelling mistakes or broken words;
GSW named entities (“Ueli Aeschbacher”, “Züri”);
The presence of many umlauts and/or short words;
The repetition of letters, also used to convey emotions.
The quality of the corpus highly depends on the text extraction step, which itself depends on the HTML structure of the pages. As there are no enforced standards and each website has its own needs, it is impossible to handle all edge cases. For example, some sites use hidden <span> elements to hold information, which become part of the extracted sentences. This is true for watson.ch and was dealt with using a specific rule, but there are still instances we did not detect.
Splitting text into sentences is not a trivial task. Typical segmentation mistakes come from the use of ASCII emojis as punctuation marks (see text sample 3 in Table TABREF23), which are very common in forums. They are hard to detect due to the variability of each individual style. We defined duplicates as having the exact same letters. As such, some sentences may differ by one umlaut and some may be the truncation of others (e.g. excerpts with ellipsis). Finally, the corpus also contains poems and lyrics. Sometimes repetitive and especially hard to segment, they are still an important source of Swiss German online. In any case, they may be filtered out using cues in the sentence length and the URLs.
<<</Discussion>>>
<<</The SwissCrawl Text Corpus>>>
<<<Swiss German Language Modeling>>>
To demonstrate the effectiveness of the SwissCrawl corpus, we conducted a series of experiments for the NLP task of language modeling. The whole code is publicly available on GitHub.
Using the GPT-2 BIBREF18 model in its base configuration (12 layers, 768-dimensional hidden states, 12 heads, 117M parameters), we trained three models using different training data:
Leipzig: unique sentences from the Leipzig GSW web corpus;
SwissCrawl: sentences with a GSW probability $\ge {99}\%$ (see Section SECREF17);
Both: the union of 1) and 2).
For each model, the vocabulary is generated using Byte Pair Encoding (BPE) BIBREF19 applied on the training set. The independent test sets are composed of 20K samples from each source.
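As an illustration of this setup, the sketch below instantiates a GPT-2 base configuration and computes perplexity as the exponential of the average token-level cross-entropy on held-out batches; it is a simplified stand-in for the released code, with training details omitted.

```python
# Illustrative perplexity evaluation with a GPT-2 base configuration.
import math
import torch
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(n_layer=12, n_embd=768, n_head=12)
model = GPT2LMHeadModel(config).eval()

def perplexity(batches):                       # batches of token-id tensors
    losses = []
    with torch.no_grad():
        for input_ids in batches:
            losses.append(model(input_ids, labels=input_ids).loss.item())
    return math.exp(sum(losses) / len(losses))
```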
Table TABREF32 shows the perplexity of the models on each of the test sets. As expected, each model performs better on the test set it has been trained on. When applied to a different test set, both see an increase in perplexity. However, the Leipzig model seems to have more trouble generalizing: its perplexity nearly doubles on the SwissCrawl test set and rises by twenty on the combined test set.
The best results are achieved by combining both corpora: while the perplexity on our corpus only marginally improves (from $49.5$ to $45.9$), the perplexity on the Leipzig corpus improves significantly (from $47.6$ to $30.5$, a 36% relative improvement).
<<</Swiss German Language Modeling>>>
<<<Conclusion>>>
In this paper, we presented the tools developed to gather the most comprehensive collection of written Swiss German to our knowledge. It represents Swiss German in the way it is actually used in informal contexts, both with respect to the form (punctuation, capitalization, ...) and the content (slang, elliptic sentences, ...). We have demonstrated how this new resource can significantly improve Swiss German language modeling. We expect that other NLP tasks, such as LID and eventually machine translation, will also be able to profit from this new resource in the future.
Our experiments support the reasoning that Swiss German is still scarce and very hard to find online. Still, the Internet is in constant evolution and we aim to keep increasing the corpus size by rerunning the tool chain at regular intervals. Another line of future development is the customization of the tools for big social media platforms such as Facebook and Twitter, where most of the content is only accessible through specific APIs.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, The SwissCrawl Text Corpus"
],
"type": "disordered_section"
}
|
1912.00159
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Automatic Creation of Text Corpora for Low-Resource Languages from the Internet: The Case of Swiss German
<<<Abstract>>>
This paper presents SwissCrawl, the largest Swiss German text corpus to date. Composed of more than half a million sentences, it was generated using a customized web scraping tool that could be applied to other low-resource languages as well. The approach demonstrates how freely available web pages can be used to construct comprehensive text corpora, which are of fundamental importance for natural language processing. In an experimental evaluation, we show that using the new corpus leads to significant improvements for the task of language modeling. To capture new content, our approach will run continuously to keep increasing the corpus over time.
<<</Abstract>>>
<<<Introduction>>>
Swiss German (“Schwyzerdütsch” or “Schwiizertüütsch”, abbreviated “GSW”) is the name of a large continuum of dialects attached to the Germanic language tree spoken by more than 60% of the Swiss population BIBREF0. Used every day from colloquial conversations to business meetings, Swiss German in its written form has become more and more popular in recent years with the rise of blogs, messaging applications and social media. However, the variability of the written form is rather large as orthography is more based on local pronunciations and emerging conventions than on a unique grammar.
Even though Swiss German is widely spread in Switzerland, there are still few natural language processing (NLP) corpora, studies or tools available BIBREF1. This lack of resources may be explained by the small pool of speakers (less than one percent of the world population), but also the many intrinsic difficulties of Swiss German, including the lack of official writing rules, the high variability across different dialects, and the informal context in which texts are commonly written. Furthermore, there is no official top-level domain (TLD) for Swiss German on the Internet, which renders the automatic collection of Swiss German texts more difficult.
To automate the treatment of Swiss German and foster its adoption in online services such as automatic speech recognition (ASR), we gathered the largest corpus of written Swiss German to date by crawling the web using a customized tool. We highlight the difficulties for finding Swiss German on the web and demonstrate in an experimental evaluation how our text corpus can be used to significantly improve an important NLP task that is a fundamental part of the ASR process: language modeling.
<<</Introduction>>>
<<<Related Work>>>
Few GSW corpora already exist. Although they are very valuable for research on specific aspects of the Swiss German language, they are either highly specialized BIBREF2 BIBREF3 BIBREF4, rather small BIBREF1 (7,305 sentences), or do not offer full sentences BIBREF5.
To our knowledge, the only comprehensive written Swiss German corpus to date comes from the Leipzig corpora collection initiative BIBREF6 offering corpora for more than 136 languages. The Swiss German data has two sources: the Alemannic Wikipedia and web crawls on the .ch domain in 2016 and 2017, leading to a total of 175,399 unique sentences. While the Leipzig Web corpus for Swiss German is of considerable size, we believe this number does not reflect the actual amount of GSW available on the Internet. Furthermore, the enforced sentence structures do not represent the way Swiss German speakers write online.
In this paper, we thus aim at augmenting the Leipzig Web corpus by looking further than the .ch domain and by using a suite of tools specifically designed for retrieving Swiss German.
The idea of using the web as a vast source of linguistic data has been around for decades BIBREF7 and many authors have already addressed its importance for low-resource languages BIBREF8. A common technique is to send queries made of mid-frequency $n$-grams to a search engine to gather bootstrap URLs, which initiate a crawl using a breadth-first strategy in order to gather meaningful information, such as documents or words BIBREF9, BIBREF5.
Existing tools and studies, however, have requirements that are inadequate for the case of Swiss German. For example, GSW is not a language known to search engines BIBREF9, does not have specific TLDs BIBREF10, and lacks good language identification models. Also, GSW documents are too rare to use bootstrapping techniques BIBREF8. Finally, as GSW is scarce and mostly found in comments sections or as part of multilingual web pages (e.g. High German), we cannot afford to “privilege precision over recall” BIBREF11 by focusing on the main content of a page.
As a consequence, our method is based on known techniques that are adapted to deal with those peculiarities. Furthermore, it was designed for having a human in the loop. Its iterative nature makes it possible to refine each step of the tool chain as our knowledge of GSW improves.
<<</Related Work>>>
<<<Proposed System>>>
The two main components of our proposed system are shown in Figure FIGREF1: a seeder that gathers potentially interesting URLs using a Search Engine and a crawler that extracts GSW from web pages, linked together by a MongoDB database. The system is implemented in Python 3, with the full code available on GitHub. Due to the exploratory nature of the task, the tool chain is executed in an iterative manner, allowing us to control and potentially improve the process at specific points.
<<<Language Identification>>>
Language identification (LID) is a central component of the pipeline, as it has a strong influence on the final result. In addition, readily available tools are not performing at a satisfying level. For these reasons we created a tailor-made LID system for this situation.
LID has been extensively studied over the past decades BIBREF12 and has achieved impressive results on long monolingual documents in major languages such as English. However, the task becomes more challenging when the pool of training data is small and of high variability, and when the unit of identification is only a sentence.
Free pretrained LIDs supporting GSW such as FastText BIBREF13 are trained on the Alemannic Wikipedia, which encompasses not only GSW, but also German dialects such as Badisch, Elsässisch, Schwäbisch and Vorarlbergisch. This makes the precision of the model insufficient for our purposes.
The dataset used to build our Swiss German LID is based on the Leipzig text corpora BIBREF6, mostly focusing on the texts gathered from the Internet. In preliminary experiments, we have chosen eight language classes shown in Table TABREF4, which give precedence to languages closely related to Swiss German in their structure. In this Table, GSW_LIKE refers to a combination of dialects that are similar to Swiss German but for which we did not have sufficient resources to model classes on their own.
A total of 535,000 sentences are considered for LID with an equal distribution over the eight classes. The 66,684 GSW sentences originate from the Leipzig web corpus 2017 and have been refined during preliminary experiments to exclude obvious non-GSW contents. We use 75% of the data for training, 10% for optimizing system parameters, and 15% for testing the final performance.
Using a pretrained German BERT model BIBREF14 and fine-tuning it on our corpus, we obtain a high LID accuracy of 99.58%. GSW is most confused with German (0.04%) and GSW_LIKE (0.04%). We have also validated the LID system on SMS sentences BIBREF2, where it proves robust for sentences as short as five words.
<<</Language Identification>>>
<<<The Seeder>>>
Query generation has already been extensively studied BIBREF15, BIBREF9. In the case of Swiss German, we tested three different approaches: (a) most frequent trigrams, (b) selection of 2 to 7 random words weighted by their frequency distribution and (c) human-generated queries.
When comparing the corpora generated by 100 seeds of each type, we did not observe significant differences in terms of quantity or quality for the three seeding strategies. On the positive side, $50\%$ of the sentences were different from one seed strategy to the other, suggesting an approach where strategies are mixed. However, we also observed that (a) tends to yield more similar queries over time and (c) is too time-consuming for practical use.
Considering these observations, we privileged the following approach:
Start with a list of sentences, either from a bootstrap dataset or from sentences from previous crawls using one single sentence per unique URL;
Compute the frequency over the vocabulary, normalizing words to lower case and discarding those having non-alphabetic characters;
Filter out words appearing only once or present in German or English vocabularies;
Generate query seeds by sampling 3 words with a probability following their frequency distribution;
Exclude seeds with more than two single-letter words or having a GSW probability below 95% (see Section SECREF3).
Initial sentences come from the Leipzig web corpus 2017, filtered by means of the LID described in Section SECREF3.
Each seed is submitted to startpage.com, a Google Search proxy augmented with privacy features. To ensure GSW is not auto-corrected to High German, each word is first surrounded by double quotes. The first 20 new URLs, i.e. URLs that were never seen before, are saved for further crawling.
<<</The Seeder>>>
<<<The Crawler>>>
The crawler starts with a list of URLs and metadata, taken either from a file or from the MongoDB instance, which are added to a task queue with a depth of 0. As illustrated in Figure FIGREF1, each task consists of a series of steps that will download the page content, extract well-formed GSW sentences and add links found on the page to the task queue. At different stages of this pipeline, a decider can intervene in order to stop the processing early. A crawl may also be limited to a given depth, usually set to 3.
<<<Scrape>>>
The raw HTML content is fetched and converted to UTF-8 using a mixture of requests and BeautifulSoup. Boilerplate removal such as navigation and tables uses jusText BIBREF16, but ignores stop words filtering as such a list is not available for GSW. The output is a UTF-8 text containing newlines.
<<</Scrape>>>
<<<Normalize>>>
This stage tries to fix remaining encoding issues using ftfy BIBREF17 and to remove unicode emojis. Another important task is to normalize the unicode code points used for accents, spaces, dashes, quotes etc., and strip any invisible characters. To further improve the usability of the corpus and to simplify tokenization, we also try to enforce one single convention for spaces around quotes and colons, e.g. colons after closing quote, no space inside quotes.
<<</Normalize>>>
<<<Split>>>
To split text into sentences, we implemented Moses' split-sentences.perl in Python and changed it in three main ways: existing newlines are preserved, colons and semi-colons are considered segmentation hints, and sentences are not required to start with an uppercase letter. The latter is especially important as GSW is mostly found in comments where people tend to write fast and without proper casing/punctuation. The list of non-breaking prefixes used is a concatenation of the English and German prefixes found in Moses, with a few additions.
<<</Split>>>
<<<Filter>>>
Non-sentences or malformed sentences are identified based on a list of $20+$ rules that normal sentences should obey. Most rules are specified in the form of regular expression patterns with boundaries of acceptable occurrences; a few compare the ratio of occurrences between two patterns. Examples of such rules in natural language are: “no more than one hashtag”, “no word with more than 30 characters”, “the ratio of capitalized to lowercase words is below 1.5”.
<<</Filter>>>
<<<Language ID>>>
Using the LID described in Section SECREF3, sentences with a GSW probability of less than 92% are discarded. This threshold is low on purpose in order to favor recall over precision.
<<</Language ID>>>
<<<Link filter>>>
This component is used to exclude or transform outgoing links found in a page based on duplicates, URL composition, but also specific rules for big social media sites or known blogs. Examples are the exclusion of unrelated national TLDs (.af, .nl, ...) and known media extensions (.pdf, .jpeg, etc.), the stripping of session IDs in URL parameters, and the homogenization of subdomains for sites such as Twitter. Note that filtering is based only on the URL and therefore does not handle redirects or URLs pointing to the same page. This leads to extra work during the crawling, but keeps the whole system simple.
<<</Link filter>>>
<<<Decide>>>
A decider has three main decisions to take. First, based on the metadata associated with a URL, should it be visited? In practice, we visit only new URLs, but the tool is designed such that a recrawl is possible if the page is detected as highly dynamic. The second decision arises at the end of the processing, where the page can be either saved or blacklisted. To favor recall, we currently keep any URL with at least one GSW sentence. Finally, the decider can choose to visit the outgoing links or not. After some trials, we found that following links from pages with more than two new GSW sentences is a reasonable choice, as pages with fewer sentences are often quotes or false positives.
<<</Decide>>>
<<<Duplicates>>>
During the crawl, the uniqueness of sentences and URLs considers only exact matches. However, when exporting the results, near-duplicate sentences are removed by first stripping any non-letter (including spaces) and making a lowercase comparison. We tried other near-duplicate approaches, but found that they also discarded meaningful writing variations.
<<</Duplicates>>>
<<</The Crawler>>>
<<</Proposed System>>>
<<<State of the Swiss German Web>>>
Table TABREF14 shows the results of running the system three times using 100 seeds on a virtual machine with 5 CPU cores and no GPUs. As expected, the first iteration yields the most new sentences. Unfortunately, the number of newly discovered hosts and sentences decreases exponentially as the system runs, dropping to 20K sentences on the third iteration. This result emphasizes the fact that the amount of GSW on the web is very limited.
The third iteration also took significantly longer, which highlights the difficulties of crawling the web. In this iteration, some URLs had as many as 12 thousand outgoing links that we had to visit before discarding them. Another problem arises on websites where query parameters are used in URLs to encode cookie information and on which duplicate hypotheses cannot be resolved without visiting the links.
On each new search engine query, we go further down the list of results as the top ones may already be known. As such, the percentage of pertinent URLs retrieved (% good, see decider description in Section SECREF13) slowly decreases at each iteration. It is however still above 55% of the retrieved URLs on the third run, indicating a good quality of the seeds.
<<</State of the Swiss German Web>>>
<<<The SwissCrawl Text Corpus>>>
Using the proposed system, we were able to gather more than half a million unique GSW sentences from around the web. The crawling took place between September and November 2019. The corpus is available for download in the form of a CSV file with four columns: text, url, crawl_proba, date, with crawl_proba being the GSW probability returned by the LID system (see Section SECREF3).
<<<Contents>>>
The corpus is composed of 562,524 sentences from 62K URLs among 3,472 domains. The top ten domains (see Table TABREF18) are forums and social media sites. They account for 46% of the whole corpus.
In general, we consider sentences with a GSW probability of $\ge {99}\%$ to be Swiss German with high confidence. This represents more than 89% of the corpus (500K) (see Figure FIGREF19). The sentence length varies between 25 and 998 characters with a mean of $92\pm 55$ and a median of 77 (see Figure FIGREF20), while the number of words lies between 4 and 222, with a mean of $16\pm 10$ and a median of 14. This highlights a common pattern in Swiss German writings: used mostly in informal contexts, sentences tend to be short and to include many symbols, such as emojis or repetitive punctuation.
Very long sentences are usually lyrics that lack proper punctuation and thus could not be segmented properly. We however decided to keep them in the final corpus, as they could be useful in specific tasks and are easy to filter out otherwise.
Besides the normalization described in SECREF13, no cleaning nor post-processing is applied to the sentences. This is a deliberate choice to avoid losing any information that could be pertinent for a given task or for further selection. As a result, the mean letter density is 80% and only 61% of sentences both start with an uppercase letter and end with a common punctuation mark (.!?).
Finally, although we performed no human validation per se, we actively monitored the crawling process to spot problematic domains early. This allowed us to blacklist some domains entirely, for example those serving embedded PDFs (impossible to parse properly) or those written in closely related German dialects.
<<</Contents>>>
<<<Discussion>>>
Table TABREF23 shows some hand-picked examples. As most of our sources are social media sites and forums, the writing style is often colloquial, interspersed with emojis and slang. This perfectly reflects the use of GSW in real life, where speakers switch to High German in formal conversations.
In general, the quality of the sentences is good, with few false positives, mostly in High German or other German dialects and more rarely in Dutch or Luxembourgish. The presence of specific structures in the sentences is often the cause of such mistakes, as they yield strong GSW cues. For example:
High German with spelling mistakes or broken words;
GSW named entities (“Ueli Aeschbacher”, “Züri”);
The presence of many umlauts and/or short words;
The repetition of letters, also used to convey emotions.
The quality of the corpus highly depends on the text extraction step, which itself depends on the HTML structure of the pages. As there are no enforced standards and each website has its own needs, it is impossible to handle all edge cases. For example, some sites use hidden <span> elements to hold information, which become part of the extracted sentences. This is true for watson.ch and was dealt with using a specific rule, but there are still instances we did not detect.
Splitting text into sentences is not a trivial task. Typical segmentation mistakes come from the use of ASCII emojis as punctuation marks (see text sample 3 in Table TABREF23), which are very common in forums. They are hard to detect due to the variability of each individual style. We defined duplicates as having the exact same letters. As such, some sentences may differ by one umlaut and some may be the truncation of others (e.g. excerpts with ellipsis). Finally, the corpus also contains poems and lyrics. Sometimes repetitive and especially hard to segment, they are still an important source of Swiss German online. In any case, they may be filtered out using cues in the sentence length and the URLs.
<<</Discussion>>>
<<</The SwissCrawl Text Corpus>>>
<<<Swiss German Language Modeling>>>
To demonstrate the effectiveness of the SwissCrawl corpus, we conducted a series of experiments for the NLP task of language modeling. The whole code is publicly available on GitHub.
Using the GPT-2 BIBREF18 model in its base configuration (12 layers, 768-dimensional hidden states, 12 heads, 117M parameters), we trained three models using different training data:
Leipzig: unique sentences from the Leipzig GSW web corpus;
SwissCrawl: sentences with a GSW probability $\ge {99}\%$ (see Section SECREF17);
Both: the union of 1) and 2).
For each model, the vocabulary is generated using Byte Pair Encoding (BPE) BIBREF19 applied on the training set. The independent test sets are composed of 20K samples from each source.
Table TABREF32 shows the perplexity of the models on each of the test sets. As expected, each model performs better on the test set it has been trained on. When applied to a different test set, both see an increase in perplexity. However, the Leipzig model seems to have more trouble generalizing: its perplexity nearly doubles on the SwissCrawl test set and rises by twenty on the combined test set.
The best results are achieved by combining both corpora: while the perplexity on our corpus only marginally improves (from $49.5$ to $45.9$), the perplexity on the Leipzig corpus improves significantly (from $47.6$ to $30.5$, a 36% relative improvement).
<<</Swiss German Language Modeling>>>
<<<Conclusion>>>
In this paper, we presented the tools developed to gather the most comprehensive collection of written Swiss German to our knowledge. It represents Swiss German in the way it is actually used in informal contexts, both with respect to the form (punctuation, capitalization, ...) and the content (slang, elliptic sentences, ...). We have demonstrated how this new resource can significantly improve Swiss German language modeling. We expect that other NLP tasks, such as LID and eventually machine translation, will also be able to profit from this new resource in the future.
Our experiments support the reasoning that Swiss German is still scarce and very hard to find online. Still, the Internet is in constant evolution and we aim to keep increasing the corpus size by rerunning the tool chain at regular intervals. Another line of future development is the customization of the tools for big social media platforms such as Facebook and Twitter, where most of the content is only accessible through specific APIs.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Proposed System"
],
"type": "disordered_section"
}
|
1912.00159
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Automatic Creation of Text Corpora for Low-Resource Languages from the Internet: The Case of Swiss German
<<<Abstract>>>
This paper presents SwissCrawl, the largest Swiss German text corpus to date. Composed of more than half a million sentences, it was generated using a customized web scraping tool that could be applied to other low-resource languages as well. The approach demonstrates how freely available web pages can be used to construct comprehensive text corpora, which are of fundamental importance for natural language processing. In an experimental evaluation, we show that using the new corpus leads to significant improvements for the task of language modeling. To capture new content, our approach will run continuously to keep increasing the corpus over time.
<<</Abstract>>>
<<<Introduction>>>
Swiss German (“Schwyzerdütsch” or “Schwiizertüütsch”, abbreviated “GSW”) is the name of a large continuum of dialects attached to the Germanic language tree spoken by more than 60% of the Swiss population BIBREF0. Used every day from colloquial conversations to business meetings, Swiss German in its written form has become more and more popular in recent years with the rise of blogs, messaging applications and social media. However, the variability of the written form is rather large as orthography is more based on local pronunciations and emerging conventions than on a unique grammar.
Even though Swiss German is widely spread in Switzerland, there are still few natural language processing (NLP) corpora, studies or tools available BIBREF1. This lack of resources may be explained by the small pool of speakers (less than one percent of the world population), but also the many intrinsic difficulties of Swiss German, including the lack of official writing rules, the high variability across different dialects, and the informal context in which texts are commonly written. Furthermore, there is no official top-level domain (TLD) for Swiss German on the Internet, which renders the automatic collection of Swiss German texts more difficult.
To automate the treatment of Swiss German and foster its adoption in online services such as automatic speech recognition (ASR), we gathered the largest corpus of written Swiss German to date by crawling the web using a customized tool. We highlight the difficulties for finding Swiss German on the web and demonstrate in an experimental evaluation how our text corpus can be used to significantly improve an important NLP task that is a fundamental part of the ASR process: language modeling.
<<</Introduction>>>
<<<Related Work>>>
Few GSW corpora already exist. Although they are very valuable for research on specific aspects of the Swiss German language, they are either highly specialized BIBREF2 BIBREF3 BIBREF4, rather small BIBREF1 (7,305 sentences), or do not offer full sentences BIBREF5.
To our knowledge, the only comprehensive written Swiss German corpus to date comes from the Leipzig corpora collection initiative BIBREF6 offering corpora for more than 136 languages. The Swiss German data has two sources: the Alemannic Wikipedia and web crawls on the .ch domain in 2016 and 2017, leading to a total of 175,399 unique sentences. While the Leipzig Web corpus for Swiss German is of considerable size, we believe this number does not reflect the actual amount of GSW available on the Internet. Furthermore, the enforced sentence structures do not represent the way Swiss German speakers write online.
In this paper, we thus aim at augmenting the Leipzig Web corpus by looking further than the .ch domain and by using a suite of tools specifically designed for retrieving Swiss German.
The idea of using the web as a vast source of linguistic data has been around for decades BIBREF7 and many authors have already addressed its importance for low-resource languages BIBREF8. A common technique is to send queries made of mid-frequency $n$-grams to a search engine to gather bootstrap URLs, which initiate a crawl using a breadth-first strategy in order to gather meaningful information, such as documents or words BIBREF9, BIBREF5.
Existing tools and studies, however, have requirements that are inadequate for the case of Swiss German. For example, GSW is not a language known to search engines BIBREF9, does not have specific TLDs BIBREF10, and lacks good language identification models. Also, GSW documents are too rare to use bootstrapping techniques BIBREF8. Finally, as GSW is scarce and mostly found in comments sections or as part of multilingual web pages (e.g. High German), we cannot afford to “privilege precision over recall” BIBREF11 by focusing on the main content of a page.
As a consequence, our method is based on known techniques that are adapted to deal with those peculiarities. Furthermore, it was designed for having a human in the loop. Its iterative nature makes it possible to refine each step of the tool chain as our knowledge of GSW improves.
<<</Related Work>>>
<<<Proposed System>>>
The two main components of our proposed system are shown in Figure FIGREF1: a seeder that gathers potentially interesting URLs using a Search Engine and a crawler that extracts GSW from web pages, linked together by a MongoDB database. The system is implemented in Python 3, with the full code available on GitHub. Due to the exploratory nature of the task, the tool chain is executed in an iterative manner, allowing us to control and potentially improve the process at specific points.
<<<Language Identification>>>
Language identification (LID) is a central component of the pipeline, as it has a strong influence on the final result. In addition, readily available tools are not performing at a satisfying level. For these reasons we created a tailor-made LID system for this situation.
LID has been extensively studied over the past decades BIBREF12 and has achieved impressive results on long monolingual documents in major languages such as English. However, the task becomes more challenging when the pool of training data is small and of high variability, and when the unit of identification is only a sentence.
Free pretrained LIDs supporting GSW such as FastText BIBREF13 are trained on the Alemannic Wikipedia, which encompasses not only GSW, but also German dialects such as Badisch, Elsässisch, Schwäbisch and Vorarlbergisch. This makes the precision of the model insufficient for our purposes.
The dataset used to build our Swiss German LID is based on the Leipzig text corpora BIBREF6, mostly focusing on the texts gathered from the Internet. In preliminary experiments, we have chosen eight language classes shown in Table TABREF4, which give precedence to languages closely related to Swiss German in their structure. In this Table, GSW_LIKE refers to a combination of dialects that are similar to Swiss German but for which we did not have sufficient resources to model classes on their own.
A total of 535,000 sentences are considered for LID with an equal distribution over the eight classes. The 66,684 GSW sentences originate from the Leipzig web corpus 2017 and have been refined during preliminary experiments to exclude obvious non-GSW contents. We use 75% of the data for training, 10% for optimizing system parameters, and 15% for testing the final performance.
Using a pretrained German BERT model BIBREF14 and fine-tuning it on our corpus, we obtain a high LID accuracy of 99.58%. GSW is most confused with German (0.04%) and GSW_LIKE (0.04%). We have also validated the LID system on SMS sentences BIBREF2, where it proves robust for sentences as short as five words.
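Below is a minimal sketch of how such a fine-tuned sentence classifier could be queried at crawl time. The checkpoint path and the index of the GSW class are placeholders (the real class inventory is given in Table TABREF4), and a HuggingFace Transformers checkpoint is assumed rather than the exact model used here.

```python
# Hedged sketch: sentence-level LID with a fine-tuned BERT classifier.
# MODEL_DIR and GSW_INDEX are assumptions, not the released artifacts.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_DIR = "path/to/finetuned-gsw-lid"   # hypothetical fine-tuned checkpoint
GSW_INDEX = 0                             # assumed position of the GSW class

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR).eval()

def gsw_probability(sentence: str) -> float:
    """Return the model's probability that a single sentence is Swiss German."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze(0)
    return probs[GSW_INDEX].item()

# During crawling, sentences scoring below a threshold (e.g. 0.92) are dropped.
print(gsw_probability("Ich gang hüt no id Stadt go poschte."))
```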
<<</Language Identification>>>
<<<The Seeder>>>
Query generation has already been extensively studied BIBREF15, BIBREF9. In the case of Swiss German, we tested three different approaches: (a) most frequent trigrams, (b) selection of 2 to 7 random words weighted by their frequency distribution and (c) human-generated queries.
When comparing the corpora generated by 100 seeds of each type, we did not observe significant differences in terms of quantity or quality for the three seeding strategies. On the positive side, $50\%$ of the sentences differed from one seed strategy to another, suggesting an approach where strategies are mixed. However, we also observed that (a) tends to yield more similar queries over time and (c) is too time-consuming for practical use.
Considering these observations, we privileged the following approach:
Start with a list of sentences, either from a bootstrap dataset or from sentences from previous crawls using one single sentence per unique URL;
Compute the frequency over the vocabulary, normalizing words to lower case and discarding those having non-alphabetic characters;
Filter out words appearing only once or present in German or English vocabularies;
Generate query seeds by sampling 3 words with a probability following their frequency distribution;
Exclude seeds with more than two single-letter words or having a GSW probability below 95% (see Section SECREF3).
Initial sentences come from the Leipzig web corpus 2017, filtered by means of the LID described in Section SECREF3.
Each seed is submitted to startpage.com, a Google Search proxy augmented with privacy features. To ensure GSW is not auto-corrected to High German, each word is first surrounded by double quotes. The first 20 new URLs, i.e. URLs that were never seen before, are saved for further crawling.
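The seed-generation steps above can be summarised in a few lines of Python. The sketch below is a simplified illustration under stated assumptions: the German/English word list and the LID call are stand-ins for the real pipeline components.

```python
# Simplified sketch of query-seed generation: frequency counting, filtering
# and weighted sampling of 3-word seeds. Not the production implementation.
import random
import re
from collections import Counter

def build_vocabulary(sentences, german_english_words=frozenset()):
    counts = Counter(
        word.lower()
        for sentence in sentences
        for word in sentence.split()
        if re.fullmatch(r"[a-zäöüéèà]+", word.lower())   # alphabetic tokens only
    )
    # Drop words seen only once or also present in German/English vocabularies.
    return {w: c for w, c in counts.items() if c > 1 and w not in german_english_words}

def generate_seed(vocab, rng=random):
    words, weights = zip(*vocab.items())
    return rng.choices(words, weights=weights, k=3)   # sampling follows frequencies

def seed_is_valid(seed, gsw_probability):
    single_letter_words = sum(1 for w in seed if len(w) == 1)
    return single_letter_words <= 2 and gsw_probability(" ".join(seed)) >= 0.95

# Each accepted seed is then quoted word-by-word and sent to the search engine:
# query = " ".join(f'"{w}"' for w in seed)
```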
<<</The Seeder>>>
<<<The Crawler>>>
The crawler starts with a list of URLs and metadata, taken either from a file or from the MongoDB instance, which are added to a task queue with a depth of 0. As illustrated in Figure FIGREF1, each task consists of a series of steps that will download the page content, extract well-formed GSW sentences and add links found on the page to the task queue. At different stages of this pipeline, a decider can intervene in order to stop the processing early. A crawl may also be limited to a given depth, usually set to 3.
<<<Scrape>>>
The raw HTML content is fetched and converted to UTF-8 using a mixture of requests and BeautifulSoup. Boilerplate such as navigation elements and tables is removed using jusText BIBREF16, without stop word filtering since no such list is available for GSW. The output is a UTF-8 text containing newlines.
<<</Scrape>>>
<<<Normalize>>>
This stage tries to fix remaining encoding issues using ftfy BIBREF17 and to remove unicode emojis. Another important task is to normalize the unicode code points used for accents, spaces, dashes, quotes etc., and strip any invisible characters. To further improve the usability of the corpus and to simplify tokenization, we also try to enforce one single convention for spaces around quotes and colons, e.g. colons after closing quote, no space inside quotes.
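A condensed sketch of this stage is given below; it relies on ftfy and Python's unicodedata, while the full quote and colon spacing conventions of the pipeline are simplified away.

```python
# Condensed normalization sketch: fix mojibake, normalize code points, strip
# emojis and invisible characters. Spacing rules are simplified.
import re
import unicodedata
import ftfy

EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def normalize(text: str) -> str:
    text = ftfy.fix_text(text)                      # repair remaining encoding issues
    text = unicodedata.normalize("NFC", text)       # canonical code points for accents
    text = EMOJI_RE.sub("", text)                   # drop (most) unicode emojis
    text = "".join(c for c in text if unicodedata.category(c) != "Cf")  # invisibles
    return re.sub(r"[ \t]+", " ", text).strip()     # collapse runs of spaces
```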
<<</Normalize>>>
<<<Split>>>
To split text into sentences, we implemented Moses' split-sentences.perl in Python and changed it in three main ways: existing newlines are preserved, colons and semi-colons are considered segmentation hints and sentences are not required to start with an uppercase. The latter is especially important as GSW is mostly found in comments where people tend to write fast and without proper casing/punctuation. The list of non-breaking prefixes used is a concatenation of the English and German prefixes found in Moses with few additions.
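A toy version of this splitter, without the non-breaking prefix handling, might look as follows.

```python
# Toy sentence splitter: newlines are hard boundaries, colons and semicolons
# are treated as segmentation hints, and no leading uppercase is required.
# The real implementation additionally honours non-breaking prefixes.
import re

BOUNDARY = re.compile(r"(?<=[.!?;:])\s+")

def split_sentences(text: str):
    sentences = []
    for line in text.splitlines():                  # preserve existing newlines
        sentences.extend(p.strip() for p in BOUNDARY.split(line) if p.strip())
    return sentences

print(split_sentences("s geit mr guet: und dir? mir au.\nNöii Zile."))
```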
<<</Split>>>
<<<Filter>>>
Non-sentences or badly formed sentences are identified based on a list of $20+$ rules that normal sentences should obey. Most rules are specified as regular expression patterns with bounds on acceptable occurrences; a few compare the ratio of occurrences between two patterns. Examples of such rules in natural language are: “no more than one hashtag”, “no word with more than 30 characters”, “the ratio of capitalized to lowercase words is below 1.5”.
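The snippet below illustrates a handful of such rules in code; the patterns and bounds are examples in the spirit of the rules quoted above, not the complete rule set.

```python
# Illustrative subset of the rule-based sentence filter (not the full 20+ rules).
def looks_like_sentence(sentence: str) -> bool:
    words = sentence.split()
    if not words:
        return False
    if sentence.count("#") > 1:                         # no more than one hashtag
        return False
    if any(len(w) > 30 for w in words):                 # no overly long "word"
        return False
    capitalized = sum(1 for w in words if w[:1].isupper())
    lowercase = sum(1 for w in words if w[:1].islower())
    if capitalized > 1.5 * max(lowercase, 1):           # capitalized/lowercase ratio
        return False
    return True
```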
<<</Filter>>>
<<<Language ID>>>
Using the LID described in Section SECREF3, sentences with a GSW probability of less than 92% are discarded. This threshold is low on purpose in order to favor recall over precision.
<<</Language ID>>>
<<<Link filter>>>
This component is used to exclude or transform outgoing links found in a page based on duplicates, URL composition, but also specific rules for big social media sites or known blogs. Examples are the exclusion of unrelated national TLDs (.af, .nl, ...) and known media extensions (.pdf, .jpeg, etc.), the stripping of session IDs in URL parameters, and the homogenization of subdomains for sites such as Twitter. Note that filtering is based only on the URL and therefore does not handle redirects or URLs pointing to the same page. This leads to extra work during the crawling, but keeps the whole system simple.
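In code, such URL-only rules could be sketched as follows; the TLD, extension and parameter lists are illustrative only.

```python
# Sketch of the outgoing-link filter: drop unrelated TLDs and media files,
# strip session identifiers. The lists are examples, not the deployed rules.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

EXCLUDED_TLDS = (".af", ".nl", ".fr", ".it")
EXCLUDED_EXTENSIONS = (".pdf", ".jpeg", ".jpg", ".png", ".zip")
SESSION_PARAMS = {"sid", "sessionid", "phpsessid"}

def keep_and_clean(url: str):
    parsed = urlparse(url)
    if parsed.netloc.lower().endswith(EXCLUDED_TLDS):
        return None
    if parsed.path.lower().endswith(EXCLUDED_EXTENSIONS):
        return None
    # Stripping session IDs lets duplicate pages collapse onto a single URL.
    query = [(k, v) for k, v in parse_qsl(parsed.query) if k.lower() not in SESSION_PARAMS]
    return urlunparse(parsed._replace(query=urlencode(query)))

print(keep_and_clean("https://example.ch/forum?thread=12&PHPSESSID=abc123"))
```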
<<</Link filter>>>
<<<Decide>>>
A decider has three main decisions to take. First, based on the metadata associated with a URL, should it be visited? In practice, we visit only new URLs, but the tool is designed so that a recrawl is possible if the page is detected as highly dynamic. The second decision arises at the end of the processing, where the page can be either saved or blacklisted. To favor recall, we currently keep any URL with at least one GSW sentence. Finally, the decider can choose to visit the outgoing links or not. After some trials, we found that following links from pages with more than two new GSW sentences is a reasonable choice, as pages with fewer sentences are often quotes or false positives.
<<</Decide>>>
<<<Duplicates>>>
During the crawl, the uniqueness of sentences and URLs considers only exact matches. However, when exporting the results, near-duplicate sentences are removed by first stripping any non-letter (including spaces) and making a lowercase comparison. We tried other near-duplicate approaches, but found that they also discarded meaningful writing variations.
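The export-time rule is short enough to state directly in code; the following is a sketch of the comparison key, not the exact implementation.

```python
# Near-duplicate removal at export time: two sentences are duplicates when
# they are identical after keeping letters only and lowercasing.
def dedup_key(sentence: str) -> str:
    return "".join(c for c in sentence.lower() if c.isalpha())

def remove_near_duplicates(sentences):
    seen, unique = set(), []
    for sentence in sentences:
        key = dedup_key(sentence)
        if key and key not in seen:
            seen.add(key)
            unique.append(sentence)
    return unique

print(remove_near_duplicates(["Das isch super!", "Das isch   super", "Das isch guet."]))
```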
<<</Duplicates>>>
<<</The Crawler>>>
<<</Proposed System>>>
<<<State of the Swiss German Web>>>
Table TABREF14 shows the results of running the system three times using 100 seeds on a virtual machine with 5 CPU cores and no GPUs. As expected, the first iteration yields the most new sentences. Unfortunately, the number of newly discovered hosts and sentences decreases exponentially as the system runs, dropping to 20K sentences on the third iteration. This result emphasizes the fact that the amount of GSW on the web is very limited.
The third iteration also took significantly longer, which highlights the difficulties of crawling the web. In this iteration, some URLs had as many as 12 thousand outgoing links that we had to visit before discarding them. Another problem arises on web sites where query parameters are used in URLs to encode cookie information, so that duplicate candidates cannot be resolved without visiting the links.
On each new search engine query, we go further down the list of results as the top ones may already be known. As such, the percentage of pertinent URLs retrieved (% good, see decider description in Section SECREF13) slowly decreases at each iteration. It is however still above 55% of the retrieved URLs on the third run, indicating a good quality of the seeds.
<<</State of the Swiss German Web>>>
<<<The SwissCrawl Text Corpus>>>
Using the proposed system, we were able to gather more than half a million unique GSW sentences from around the web. The crawling took place between September and November 2019. The corpus is available for download in the form of a CSV file with four columns: text, url, crawl_proba, date, with crawl_proba being the GSW probability returned by the LID system (see Section SECREF3).
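As an illustration of the release format, the following snippet loads the CSV with pandas and keeps only high-confidence sentences; the file name is a placeholder.

```python
# Load the released corpus and keep only sentences with p(GSW) >= 0.99.
import pandas as pd

corpus = pd.read_csv("swisscrawl.csv")              # columns: text, url, crawl_proba, date
high_confidence = corpus[corpus["crawl_proba"] >= 0.99]
print(f"{len(corpus)} sentences in total, {len(high_confidence)} with p(GSW) >= 0.99")
```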
<<<Contents>>>
The corpus is composed of 562,524 sentences from 62K URLs among 3,472 domains. The top ten domains (see Table TABREF18) are forums and social media sites. They account for 46% of the whole corpus.
In general, we consider sentences with a GSW probability of $\ge {99}\%$ to be Swiss German with high confidence. This represents more than 89% of the corpus (500K) (see Figure FIGREF19). The sentence length varies between 25 and 998 characters with a mean of $92\pm 55$ and a median of 77 (see Figure FIGREF20), while the number of words lies between 4 and 222, with a mean of $16\pm 10$ and a median of 14. This highlights a common pattern in Swiss German writing: used mostly in informal contexts, sentences tend to be short and to include many symbols, such as emojis or repetitive punctuation.
Very long sentences are usually lyrics that lack proper punctuation and thus could not be segmented properly. We however decided to keep them in the final corpus, as they could be useful in specific tasks and are easy to filter out otherwise.
Besides the normalization described in Section SECREF13, no cleaning or post-processing is applied to the sentences. This is a deliberate choice to avoid losing any information that could be pertinent for a given task or for further selection. As a result, the mean letter density is 80% and only 61% of sentences both start with an uppercase letter and end with a common punctuation mark (.!?).
Finally, although we performed no human validation per se, we actively monitored the crawling process to spot problematic domains early. This allowed us to blacklist some domains entirely, for example those serving embedded PDFs (impossible to parse properly) or written in closely related German dialects.
<<</Contents>>>
<<<Discussion>>>
Table TABREF23 shows some hand-picked examples. As most of our sources are social media sites and forums, the writing style is often colloquial, interspersed with emojis and slang. This perfectly reflects the use of GSW in real life, where speakers switch to High German in formal conversations.
In general, the quality of sentences is good, with few false positives mostly in High German or German dialects, and rarer still in Dutch or Luxembourgian. The presence of specific structures in the sentences is often the cause of such mistakes, as they yield strong GSW cues. For example:
High German with spelling mistakes or broken words;
GSW named entities (“Ueli Aeschbacher”, “Züri”);
The presence of many umlauts and/or short words;
The repetition of letters, also used to convey emotions.
The quality of the corpus highly depends on the text extraction step, which itself depends on the HTML structure of the pages. As there are no enforced standards and each website has its own needs, it is impossible to handle all edge cases. For example, some sites use hidden <span> elements to hold information, which become part of the extracted sentences. This is true for watson.ch and was dealt with using a specific rule, but there are still instances we did not detect.
Splitting text into sentences is not a trivial task. Typical segmentation mistakes come from the use of ASCII emojis as punctuation marks (see text sample 3 in Table TABREF23), which are very common in forums. They are hard to detect due to the variability of each individual style. We defined duplicates as having the exact same letters. As such, some sentences may differ by one umlaut and some may be the truncation of others (e.g. excerpts with ellipsis). Finally, the corpus also contains poems and lyrics. Sometimes repetitive and especially hard to segment, they are still an important source of Swiss German online. In any case, they may be filtered out using cues in the sentence length and the URLs.
<<</Discussion>>>
<<</The SwissCrawl Text Corpus>>>
<<<Swiss German Language Modeling>>>
To demonstrate the effectiveness of the SwissCrawl corpus, we conducted a series of experiments for the NLP task of language modeling. The whole code is publicly available on GitHub.
Using the GPT-2 BIBREF18 model in its base configuration (12 layers, 786 hidden states, 12 heads, 117M parameters), we trained three models using different training data:
Leipzig unique sentences from the Leipzig GSW web;
SwissCrawl sentences with a GSW probability $\ge {99}\%$ (see Section SECREF17);
Both the union of 1) and 2).
For each model, the vocabulary is generated using Byte Pair Encoding (BPE) BIBREF19 applied on the training set. The independent test sets are composed of 20K samples from each source.
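For completeness, the sketch below shows one way per-token perplexity over such a test set can be computed with a trained GPT-2 checkpoint; the path is a placeholder and batching/context handling are simplified compared to a real evaluation.

```python
# Hedged sketch: corpus perplexity of a trained GPT-2 model over a list of
# test sentences. MODEL_DIR is a hypothetical checkpoint directory.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

MODEL_DIR = "path/to/gsw-gpt2"
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_DIR)
model = GPT2LMHeadModel.from_pretrained(MODEL_DIR).eval()

def corpus_perplexity(sentences):
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for sentence in sentences:
            ids = tokenizer(sentence, return_tensors="pt").input_ids
            if ids.size(1) < 2:
                continue
            loss = model(ids, labels=ids).loss          # mean NLL per predicted token
            total_nll += loss.item() * (ids.size(1) - 1)
            total_tokens += ids.size(1) - 1
    return math.exp(total_nll / total_tokens)
```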
Table TABREF32 shows the perplexity of the models on each of the test sets. As expected, each model performs best on the test set it has been trained on. When applied to a different test set, both see an increase in perplexity. However, the Leipzig model seems to have more trouble generalizing: its perplexity nearly doubles on the SwissCrawl test set and rises by twenty on the combined test set.
The best results are achieved by combining both corpora: while the perplexity on our corpus only marginally improves (from $49.5$ to $45.9$), the perplexity on the Leipzig corpus improves significantly (from $47.6$ to $30.5$, a 36% relative improvement).
<<</Swiss German Language Modeling>>>
<<<Conclusion>>>
In this paper, we presented the tools developed to gather the most comprehensive collection of written Swiss German to our knowledge. It represents Swiss German in the way it is actually used in informal contexts, both with respect to the form (punctuation, capitalization, ...) and the content (slang, elliptic sentences, ...). We have demonstrated how this new resource can significantly improve Swiss German language modeling. We expect that other NLP tasks, such as LID and eventually machine translation, will also be able to profit from this new resource in the future.
Our experiments support the reasoning that Swiss German is still scarce and very hard to find online. Still, the Internet is in constant evolution and we aim to keep increasing the corpus size by rerunning the tool chain at regular intervals. Another line of future development is the customization of the tools for big social media platforms such as Facebook and Twitter, where most of the content is only accessible through specific APIs.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Abstract, Introduction"
],
"type": "disordered_section"
}
|
1909.05855
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset
<<<Abstract>>>
Virtual assistants such as Google Assistant, Alexa and Siri provide a conversational interface to a large number of services and APIs spanning multiple domains. Such systems need to support an ever-increasing number of services with possibly overlapping functionality. Furthermore, some of these services have little to no training data available. Existing public datasets for task-oriented dialogue do not sufficiently capture these challenges since they cover few domains and assume a single static ontology per domain. In this work, we introduce the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains. Our dataset exceeds the existing task-oriented dialogue corpora in scale, while also highlighting the challenges associated with building large-scale virtual assistants. It provides a challenging testbed for a number of tasks including language understanding, slot filling, dialogue state tracking and response generation. Along the same lines, we present a schema-guided paradigm for task-oriented dialogue, in which predictions are made over a dynamic set of intents and slots, provided as input, using their natural language descriptions. This allows a single dialogue system to easily support a large number of services and facilitates simple integration of new services without requiring additional training data. Building upon the proposed paradigm, we release a model for dialogue state tracking capable of zero-shot generalization to new APIs, while remaining competitive in the regular setting.
<<</Abstract>>>
<<<Introduction>>>
Virtual assistants help users accomplish tasks including but not limited to finding flights, booking restaurants and, more recently, navigating user interfaces, by providing a natural language interface to services and APIs on the web. The recent popularity of conversational interfaces and the advent of frameworks like Actions on Google and Alexa Skills, which allow developers to easily add support for new services, has resulted in a major increase in the number of application domains and individual services that assistants need to support, following the pattern of smartphone applications.
Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, M2M BIBREF1 and FRAMES BIBREF2.
However, existing datasets for multi-domain task-oriented dialogue do not sufficiently capture a number of challenges that arise with scaling virtual assistants in production. These assistants need to support a large BIBREF3, constantly increasing number of services over a large number of domains. In comparison, existing public datasets cover few domains. Furthermore, they define a single static API per domain, whereas multiple services with overlapping functionality, but heterogeneous interfaces, exist in the real world.
To highlight these challenges, we introduce the Schema-Guided Dialogue (SGD) dataset, which is, to the best of our knowledge, the largest public task-oriented dialogue corpus. It exceeds existing corpora in scale, with over 16000 dialogues in the training set spanning 26 services belonging to 16 domains (more details in Table TABREF2). Further, to adequately test the models' ability to generalize in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants.
We also propose the schema-guided paradigm for task-oriented dialogue, advocating building a single unified dialogue model for all services and APIs. Using a service's schema as input, the model would make predictions over this dynamic set of intents and slots present in the schema. This setting enables effective sharing of knowledge among all services, by relating the semantic information in the schemas, and allows the model to handle unseen services and APIs. Under the proposed paradigm, we present a novel architecture for multi-domain dialogue state tracking. By using large pretrained models like BERT BIBREF4, our model can generalize to unseen services and is robust to API changes, while achieving state-of-the-art results on the original and updated BIBREF5 MultiWOZ datasets.
<<</Introduction>>>
<<<Related Work>>>
Task-oriented dialogue systems have constituted an active area of research for decades. The growth of this field has been consistently fueled by the development of new datasets. Initial datasets were limited to one domain, such as ATIS BIBREF6 for spoken language understanding for flights. The Dialogue State Tracking Challenges BIBREF7, BIBREF8, BIBREF9, BIBREF10 contributed to the creation of dialogue datasets with increasing complexity. Other notable related datasets include WOZ2.0 BIBREF11, FRAMES BIBREF2, M2M BIBREF1 and MultiWOZ BIBREF0. These datasets have utilized a variety of data collection techniques, falling within two broad categories:
Wizard-of-Oz This setup BIBREF12 connects two crowd workers playing the roles of the user and the system. The user is provided a goal to satisfy, and the system accesses a database of entities, which it queries as per the user's preferences. WOZ2.0, FRAMES and MultiWOZ, among others, have utilized such methods.
Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically.
As virtual assistants incorporate diverse domains, recent work has focused on zero-shot modeling BIBREF13, BIBREF14, BIBREF15, domain adaptation and transfer learning techniques BIBREF16. Deep-learning based approaches have achieved state of the art performance on dialogue state tracking tasks. Popular approaches on small-scale datasets estimate the dialogue state as a distribution over all possible slot-values BIBREF17, BIBREF11 or individually score all slot-value combinations BIBREF18, BIBREF19. Such approaches are not practical for deployment in virtual assistants operating over real-world services having a very large and dynamic set of possible values. Addressing these concerns, approaches utilizing a dynamic vocabulary of slot values have been proposed BIBREF20, BIBREF21, BIBREF22.
<<</Related Work>>>
<<<The Schema-Guided Dialogue Dataset>>>
An important goal of this work is to create a benchmark dataset highlighting the challenges associated with building large-scale virtual assistants. Table TABREF2 compares our dataset with other public datasets. Our Schema-Guided Dialogue (SGD) dataset exceeds other datasets in most of the metrics at scale. The especially larger number of domains, slots, and slot values, and the presence of multiple services per domain, are representative of these scale-related challenges. Furthermore, our evaluation sets contain many services, and consequently slots, which are not present in the training set, to help evaluate model performance on unseen services.
The 17 domains (`Alarm' domain not included in training) present in our dataset are listed in Table TABREF5. We create synthetic implementations of a total of 34 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are a structured representation of dialogue semantics. We then used a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps in detail and then present analyses of the collected dataset.
<<<Services and APIs>>>
We define the schema for a service as a combination of intents and slots with additional constraints, with an example in Figure FIGREF7. We implement all services using a SQL engine. For constructing the underlying tables, we sample a set of entities from Freebase and obtain the values for slots defined in the schema from the appropriate attribute in Freebase. We decided to use Freebase to sample real-world entities instead of synthetic ones since entity attributes are often correlated (e.g., a restaurant's name is indicative of the cuisine served). Some slots like event dates/times and available ticket counts, which are not present in Freebase, are synthetically sampled.
To reflect the constraints present in real-world services and APIs, we impose a few other restrictions. First, our dataset does not expose the set of all possible slot values for some slots. Having such a list is impractical for slots like date or time because they have infinitely many possible values or for slots like movie or song names, for which new values are periodically added. Our dataset specifically identifies such slots as non-categorical and does not provide a set of all possible values for these. We also ensure that the evaluation sets have a considerable fraction of slot values not present in the training set to evaluate the models in the presence of new values. Some slots like gender, number of people, day of the week etc. are defined as categorical and we specify the set of all possible values taken by them. However, these values are not assumed to be consistent across services. E.g., different services may use (`male', `female'), (`M', `F') or (`he', `she') as possible values for gender slot.
Second, real-world services can only be invoked with a limited number of slot combinations: e.g. restaurant reservation APIs do not let the user search for restaurants by date without specifying a location. However, existing datasets simplistically allow service calls with any given combination of slot values, thus giving rise to flows unsupported by actual services or APIs. As in Figure FIGREF7, the different service calls supported by a service are listed as intents. Each intent specifies a set of required slots and the system is not allowed to call this intent without specifying values for these required slots. Each intent also lists a set of optional slots with default values, which the user can override.
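To make this more concrete, below is a small, hypothetical schema in the spirit of Figure FIGREF7; the field names and values are illustrative and do not reproduce the exact dataset format.

```python
# Hypothetical service schema: descriptions, categorical vs. non-categorical
# slots, and intents with required and optional (defaulted) slots.
flights_schema = {
    "service_name": "Flights_1",
    "description": "Search and reserve flights between cities",
    "slots": [
        {"name": "origin", "description": "City of departure", "is_categorical": False},
        {"name": "destination", "description": "City of arrival", "is_categorical": False},
        {"name": "seating_class", "description": "Seating class of the ticket",
         "is_categorical": True, "possible_values": ["Economy", "Business", "First"]},
    ],
    "intents": [
        {"name": "SearchFlights",
         "description": "Find flights between two cities on a given date",
         "required_slots": ["origin", "destination"],
         "optional_slots": {"seating_class": "Economy"}},
    ],
}
```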
<<</Services and APIs>>>
<<<Dialogue Simulator Framework>>>
The dialogue simulator interacts with the services to generate dialogue outlines. Figure FIGREF9 shows the overall architecture of our dialogue simulator framework. It consists of two agents playing the roles of the user and the system. Both agents interact with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. These dialogue acts can take a slot or a slot-value pair as argument. Figure FIGREF13 shows all dialogue acts supported by the agents.
At the start of a conversation, the user agent is seeded with a scenario, which is a sequence of intents to be fulfilled. We identified over 200 distinct scenarios for the training set, each comprising up to 5 intents. For multi-domain dialogues, we also identify combinations of slots whose values may be transferred when switching intents e.g. the 'address' slot value in a restaurant service could be transferred to the 'destination' slot for a taxi service invoked right after.
The user agent then generates the dialogue acts to be output in the next turn. It may retrieve arguments i.e. slot values for some of the generated acts by accessing either the service schema or the raw SQL backend. The acts, combined with the respective parameters yield the corresponding user actions. Next, the system agent generates the next set of actions using a similar procedure. Unlike the user agent, however, the system agent has restricted access to the services (denoted by dashed line), e.g. it can only query the services by supplying values for all required slots for some service call. This helps us ensure that all generated flows are valid.
After an intent is fulfilled through a series of user and system actions, the user agent queries the scenario to proceed to the next intent. Alternatively, the system may suggest related intents e.g. reserving a table after searching for a restaurant. The simulator also allows for multiple intents to be active during a given turn. While we skip many implementation details for brevity, it is worth noting that we do not include any domain-specific constraints in the simulation automaton. All domain-specific constraints are encoded in the schema and scenario, allowing us to conveniently use the simulator across a wide variety of domains and services.
<<</Dialogue Simulator Framework>>>
<<<Dialogue Paraphrasing>>>
The dialogue paraphrasing framework converts the outlines generated by the simulator into a natural conversation. Figure FIGREF11a shows a snippet of the dialogue outline generated by the simulator, containing a sequence of user and system actions. The slot values present in these actions are in a canonical form because they are obtained directly from the service. However, users may refer to these values in various ways during the conversation, e.g., “los angeles" may be referred to as “LA" or “LAX". To introduce these natural variations in the slot values, we replace different slot values with a randomly selected variation (kept consistent across user turns in a dialogue) as shown in Figure FIGREF11b.
Next, we define a set of action templates for converting each action into an utterance. A few examples of such templates are shown below. These templates are used to convert each action into a natural language utterance, and the resulting utterances for the different actions in a turn are concatenated together as shown in Figure FIGREF11c. The dialogue transformed by these steps is then sent to the crowd workers. One crowd worker is tasked with paraphrasing all utterances of a dialogue to ensure naturalness and coherence.
In our paraphrasing task, the crowd workers are instructed to exactly repeat the slot values in their paraphrases. This not only helps us verify the correctness of the paraphrases, but also lets us automatically obtain slot spans in the generated utterances by string search. This automatic slot span generation greatly reduced the annotation effort required, with little impact on dialogue naturalness, thus allowing us to collect more data with the same resources. Furthermore, it is important to note that this entire procedure preserves all other annotations obtained from the simulator including the dialogue state. Hence, no further annotation is needed.
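A minimal sketch of this span recovery step is shown below; it ignores word-boundary edge cases that a production version would have to handle.

```python
# Recover slot spans in a paraphrased utterance by case-insensitive string
# search, relying on crowd workers repeating slot values verbatim.
def find_slot_spans(utterance, slot_values):
    """slot_values: dict mapping slot name -> surface value used in the outline."""
    spans, lowered = {}, utterance.lower()
    for slot, value in slot_values.items():
        start = lowered.find(value.lower())
        if start != -1:
            spans[slot] = (start, start + len(value))
    return spans

print(find_slot_spans("I want to fly from LA to Seattle.",
                      {"origin": "LA", "destination": "Seattle"}))
```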
<<</Dialogue Paraphrasing>>>
<<<Dataset Analysis>>>
With over 16000 dialogues in the training set, the Schema-Guided Dialogue dataset is the largest publicly available annotated task-oriented dialogue dataset. The annotations include the active intents and dialogue states for each user utterance and the system actions for every system utterance. We have a few other annotations like the user actions but we withhold them from the public release. These annotations enable our dataset to be used as benchmark for tasks like intent detection, dialogue state tracking, imitation learning of dialogue policy, dialogue act to text generation etc. The schemas contain semantic information about the schema and the constituent intents and slots, in the form of natural language descriptions and other details (example in Figure FIGREF7).
The single-domain dialogues in our dataset contain an average of 15.3 turns, whereas the multi-domain ones contain 23 turns on average. These numbers are also reflected in Figure FIGREF13 showing the histogram of dialogue lengths on the training set. Table TABREF5 shows the distribution of dialogues across the different domains. We note that the dataset is largely balanced in terms of the domains and services covered, with the exception of the Alarm domain, which is only present in the development set. Figure FIGREF13 shows the frequency of dialogue acts contained in the dataset. Note that all dialogue acts except INFORM, REQUEST and GOODBYE are specific to either the user or the system.
<<</Dataset Analysis>>>
<<</The Schema-Guided Dialogue Dataset>>>
<<<The Schema-Guided Approach>>>
Virtual assistants aim to support a large number of services available on the web. One possible approach is to define a large unified schema for the assistant, with which different service providers can integrate. However, it is difficult to come up with a common schema covering all use cases. Having a common schema also complicates integration of tail services with limited developer support. We propose the schema-guided approach as an alternative to allow easy integration of new services and APIs.
Under our proposed approach, each service provides a schema listing the supported slots and intents along with their natural language descriptions (Figure FIGREF7 shows an example). These descriptions are used to obtain a semantic representation of these schema elements. The assistant employs a single unified model containing no domain or service specific parameters to make predictions conditioned on these schema elements. For example, Figure FIGREF14 shows how dialogue state representation for the same dialogue can vary for two different services. Here, the departure and arrival cities are captured by analogously functioning but differently named slots in both schemas. Furthermore, values for the number_stops and direct_only slots highlight idiosyncrasies between services interpreting the same concept.
There are many advantages to this approach. First, using a single model facilitates representation and transfer of common knowledge across related services. Second, since the model utilizes semantic representation of schema elements as input, it can interface with unseen services or APIs on which it has not been trained. Third, it is robust to changes like addition of new intents or slots to the service.
<<</The Schema-Guided Approach>>>
<<<Zero-Shot Dialogue State Tracking>>>
Models in the schema-guided setting can condition on the pertinent services' schemas using descriptions of intents and slots. These models, however, also need access to representations for potentially unseen inputs from new services. Recent pretrained models like ELMo BIBREF23 and BERT BIBREF4 can help, since they are trained on very large corpora. Building upon these, we present our zero-shot schema-guided dialogue state tracking model.
<<<Model>>>
We use a single model, shared among all services and domains, to make these predictions. We first encode all the intents, slots and slot values for categorical slots present in the schema into an embedded representation. Since different schemas can have differing numbers of intents or slots, predictions are made over dynamic sets of schema elements by conditioning them on the corresponding schema embeddings. This is in contrast to existing models which make predictions over a static schema and are hence unable to share knowledge across domains and services. They are also not robust to changes in schema and require the model to be retrained with new annotated data upon addition of a new intent, slot, or in some cases, a slot value to a service.
<<<Schema Embedding>>>
This component obtains the embedded representations of intents, slots and categorical slot values in each service schema. Table TABREF18 shows the sequence pairs used for embedding each schema element. These sequence pairs are fed to a pretrained BERT encoder shown in Figure FIGREF20 and the output $\mathbf {u}_{\texttt {CLS}}$ is used as the schema embedding.
For a given service with $I$ intents and $S$ slots, let $\lbrace \mathbf {i}_j\rbrace $, ${1 \le j \le I}$ and $\lbrace \mathbf {s}_j\rbrace $, ${1 \le j \le S}$ be the embeddings of all intents and slots respectively. As a special case, we let $\lbrace \mathbf {s}^n_j\rbrace $, ${1 \le j \le N \le S}$ denote the embeddings for the $N$ non-categorical slots in the service. Also, let $\lbrace \textbf {v}_j^k\rbrace $, $1 \le j \le V^k$ denote the embeddings for all possible values taken by the $k^{\text{th}}$ categorical slot, $1 \le k \le C$, with $C$ being the number of categorical slots and $N + C = S$. All these embeddings are collectively called schema embeddings.
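A hedged sketch of this embedding step is given below, assuming an off-the-shelf HuggingFace BERT rather than the exact checkpoint used here; the sequence pair follows the description-pairing idea of Table TABREF18 with made-up text.

```python
# Embed a schema element by feeding a sequence pair to BERT and taking the
# [CLS] output u_CLS. Model name and example descriptions are placeholders.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
encoder = BertModel.from_pretrained("bert-base-cased").eval()

def schema_embedding(text_a: str, text_b: str) -> torch.Tensor:
    inputs = tokenizer(text_a, text_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)   # u_CLS

# e.g. an intent embedding i_j for a hypothetical "SearchFlights" intent:
i_j = schema_embedding("Flights: search and reserve flights",
                       "SearchFlights: find flights between two cities")
print(i_j.shape)   # torch.Size([768])
```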
<<</Schema Embedding>>>
<<<Utterance Encoding>>>
Like BIBREF24, we use BERT to encode the user utterance and the preceding system utterance to obtain utterance pair embedding $\mathbf {u} = \mathbf {u}_{\texttt {CLS}}$ and token level representations $\mathbf {t}_1, \mathbf {t}_2 \cdots \mathbf {t}_M$, $M$ being the total number of tokens in the two utterances. The utterance and schema embeddings are used together to obtain model predictions using a set of projections (defined below).
<<</Utterance Encoding>>>
<<<Projection>>>
Let $\mathbf {x}, \mathbf {y} \in \mathbb {R}^d$. For a task $K$, we define $\mathbf {l} = \mathcal {F}_K(\mathbf {x}, \mathbf {y}, p)$ as a projection transforming $\mathbf {x}$ and $\mathbf {y}$ into the vector $\mathbf {l} \in \mathbb {R}^p$ using Equations DISPLAY_FORM22-. Here, $\mathbf {h_1},\mathbf {h_2} \in \mathbb {R}^d$, $W^K_i$ and $b^K_i$ for $1 \le i \le 3$ are trainable parameters of suitable dimensions and $A$ is the activation function. We use $\texttt {gelu}$ BIBREF25 activation as in BERT.
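Since Equations DISPLAY_FORM22- are not reproduced in this excerpt, the module below is only one plausible realisation of the stated signature (inputs $\mathbf {x}, \mathbf {y} \in \mathbb {R}^d$, hidden vectors $\mathbf {h_1}, \mathbf {h_2} \in \mathbb {R}^d$, three trainable affine maps and a gelu activation, output in $\mathbb {R}^p$), not the authors' exact wiring.

```python
# One plausible projection F_K(x, y, p): two gelu-activated hidden layers of
# size d combining x and y, followed by a task-specific output of size p.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Projection(nn.Module):
    def __init__(self, d: int, p: int):
        super().__init__()
        self.w1 = nn.Linear(d, d)        # acts on the schema embedding y
        self.w2 = nn.Linear(2 * d, d)    # combines x with h1
        self.w3 = nn.Linear(d, p)        # task-specific output of size p

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        h1 = F.gelu(self.w1(y))
        h2 = F.gelu(self.w2(torch.cat([x, h1], dim=-1)))
        return self.w3(h2)               # logits l in R^p

# e.g. a single intent logit from an utterance embedding u and intent embedding i_j:
# logit = Projection(d=768, p=1)(u, i_j)
```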
<<</Projection>>>
<<<Active Intent>>>
For a given service, the active intent denotes the intent requested by the user and currently being fulfilled by the system. It takes the value “NONE" if no intent for the service is currently being processed. Let $\mathbf {i}_0$ be a trainable parameter in $\mathbb {R}^d$ for the “NONE" intent. We define the intent network as below.
The logits $l^{j}_{\text{int}}$ are normalized using softmax to yield a distribution over all $I$ intents and the “NONE" intent. During inference, we predict the highest probability intent as active.
<<</Active Intent>>>
<<<Requested Slots>>>
These are the slots whose values are requested by the user in the current utterance. Projection $\mathcal {F}_{\text{req}}$ predicts logit $l^j_{\text{req}}$ for the $j^{\text{th}}$ slot. Obtained logits are normalized using sigmoid to get a score in $[0,1]$. During inference, all slots with $\text{score} > 0.5$ are predicted as requested.
<<</Requested Slots>>>
<<<User Goal>>>
We define the user goal as the user constraints specified over the dialogue context till the current user utterance. Instead of predicting the entire user goal after each user utterance, we predict the difference between the user goal for the current turn and preceding user turn. During inference, the predicted user goal updates are accumulated to yield the predicted user goal. We predict the user goal updates in two stages. First, for each slot, a distribution of size 3 denoting the slot status and taking values none, dontcare and active is obtained by normalizing the logits obtained in equation DISPLAY_FORM28 using softmax. If the status of a slot is predicted to be none, its assigned value is assumed to be unchanged. If the prediction is dontcare, then the special dontcare value is assigned to it. Otherwise, a slot value is predicted and assigned to it in the second stage.
In the second stage, equation is used to obtain a logit for each value taken by each categorical slot. Logits for a given categorical slot are normalized using softmax to get a distribution over all possible values. The value with the maximum mass is assigned to the slot. For each non-categorical slot, logits obtained using equations and are normalized using softmax to yield two distributions over all tokens. These two distributions respectively correspond to the start and end index of the span corresponding to the slot. The indices $p \le q$ maximizing $start[p] + end[q]$ are predicted to be the span boundary and the corresponding value is assigned to the slot.
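The span-selection rule at the end of this paragraph can be written compactly; the sketch below assumes 1-D tensors of start and end probabilities over the $M$ utterance tokens.

```python
# Pick indices p <= q maximising start[p] + end[q] for a non-categorical slot.
import torch

def best_span(start_probs: torch.Tensor, end_probs: torch.Tensor):
    """start_probs, end_probs: 1-D tensors over the M utterance tokens."""
    scores = start_probs.unsqueeze(1) + end_probs.unsqueeze(0)   # scores[p, q]
    scores = torch.triu(scores)                                  # enforce p <= q
    flat = torch.argmax(scores)
    p, q = divmod(flat.item(), scores.size(1))
    return p, q

print(best_span(torch.tensor([0.1, 0.7, 0.2]), torch.tensor([0.2, 0.3, 0.5])))  # (1, 2)
```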
<<</User Goal>>>
<<</Model>>>
<<<Evaluation>>>
We consider the following metrics for evaluation of the dialogue state tracking task:
Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted.
Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped.
Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. The slots which have a non-empty assignment in the ground truth dialogue state are considered for accuracy. This is the average accuracy of predicting the value of a slot correctly. A fuzzy matching score is used for non-categorical slots to reward partial matches with the ground truth.
Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a turn correctly. For non-categorical slots a fuzzy matching score is used.
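As an illustration only, the snippet below computes the two goal-accuracy metrics for a single turn, with the fuzzy matcher simplified to a character-level similarity ratio and the joint score simplified to the worst per-slot score; the official evaluation may differ in these details.

```python
# Simplified per-turn goal accuracies; not the official evaluation script.
from difflib import SequenceMatcher

def slot_match(pred, gold, categorical):
    if categorical:
        return float(pred == gold)
    return SequenceMatcher(None, str(pred).lower(), str(gold).lower()).ratio()

def goal_accuracies(pred_state, gold_state, categorical_slots):
    """States are dicts slot -> value; only slots set in the gold state count."""
    scores = [slot_match(pred_state.get(slot), value, slot in categorical_slots)
              for slot, value in gold_state.items()]
    average = sum(scores) / len(scores) if scores else 1.0
    joint = min(scores) if scores else 1.0   # the turn is only as correct as its worst slot
    return average, joint
```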
<<<Performance on other datasets>>>
We evaluate our model on public datasets WOZ2.0, MultiWOZ 2.0 and the updated MultiWOZ 2.1 BIBREF5. As results in Table TABREF37 show, our model performs competitively on all these datasets. Furthermore, we obtain state-of-the-art joint goal accuracies of 0.516 on MultiWOZ 2.0 and 0.489 on MultiWOZ 2.1 test sets respectively, exceeding the best-known results of 0.486 and 0.456 on these datasets as reported in BIBREF5.
<<</Performance on other datasets>>>
<<<Performance on SGD>>>
The model performs well for Active Intent Accuracy and Requested Slots F1 across both seen and unseen services, shown in Table TABREF37. For joint goal and average goal accuracy, the model performs better on seen services compared to unseen ones (Figure FIGREF38). The main reason for this performance difference is a significantly higher OOV rate for slot values of unseen services.
<<</Performance on SGD>>>
<<<Performance on different domains (SGD)>>>
The model performance also varies across domains. The performance for the different domains is shown in Table TABREF39 below. We observe that one of the factors affecting the performance across domains is still the presence of the service in the training data (seen services). Among the seen services, those in the `Events' domain have a very low OOV rate for slot values and the largest number of training examples, which might be contributing to the high joint goal accuracy. For unseen services, we notice that the `Services' domain has a lower joint goal accuracy because of a higher OOV rate and a higher average number of turns per dialogue. For the `Services' and `Flights' domains, the difference between joint goal accuracy and average accuracy indicates a possible skew in performance across slots, where the performance on a few of the slots is much worse compared to all the other slots, thus considerably degrading the joint goal accuracy. The `RideSharing' domain also exhibits poor performance, since it possesses the largest number of possible slot values across the dataset. We also notice that for categorical slots with similar slot values (e.g. “Psychologist" and “Psychiatrist"), there is a very weak signal for the model to distinguish between the different classes, resulting in inferior performance.
<<</Performance on different domains (SGD)>>>
<<</Evaluation>>>
<<</Zero-Shot Dialogue State Tracking>>>
<<<Discussion>>>
It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below.
Fewer Annotation Errors: All annotations are automatically generated, so these errors are rare. In contrast, BIBREF5 reported annotation errors in 40% of turns in MultiWOZ 2.0 which utilized a Wizard-of-Oz setup.
Simpler Task: The crowd worker task of paraphrasing a readable utterance for each turn is simple. The error-prone annotation task requiring skilled workers is not needed.
Low Cost: The simplicity of the crowd worker task and lack of an annotation task greatly cut data collection costs.
Better Coverage: A wide variety of dialogue flows can be collected and specific usecases can be targeted.
<<</Discussion>>>
<<<Conclusions>>>
We presented the Schema-Guided Dialogue dataset to encourage scalable modeling approaches for virtual assistants. We also introduced the schema-guided paradigm for task-oriented dialogue that simplifies the integration of new services and APIs with large scale virtual assistants. Building upon this paradigm, we present a scalable zero-shot dialogue state tracking model achieving state-of-the-art results.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Introduction, The Schema-Guided Approach"
],
"type": "disordered_section"
}
|
1909.05855
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset
<<<Abstract>>>
Virtual assistants such as Google Assistant, Alexa and Siri provide a conversational interface to a large number of services and APIs spanning multiple domains. Such systems need to support an ever-increasing number of services with possibly overlapping functionality. Furthermore, some of these services have little to no training data available. Existing public datasets for task-oriented dialogue do not sufficiently capture these challenges since they cover few domains and assume a single static ontology per domain. In this work, we introduce the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains. Our dataset exceeds the existing task-oriented dialogue corpora in scale, while also highlighting the challenges associated with building large-scale virtual assistants. It provides a challenging testbed for a number of tasks including language understanding, slot filling, dialogue state tracking and response generation. Along the same lines, we present a schema-guided paradigm for task-oriented dialogue, in which predictions are made over a dynamic set of intents and slots, provided as input, using their natural language descriptions. This allows a single dialogue system to easily support a large number of services and facilitates simple integration of new services without requiring additional training data. Building upon the proposed paradigm, we release a model for dialogue state tracking capable of zero-shot generalization to new APIs, while remaining competitive in the regular setting.
<<</Abstract>>>
<<<Introduction>>>
Virtual assistants help users accomplish tasks including but not limited to finding flights, booking restaurants and, more recently, navigating user interfaces, by providing a natural language interface to services and APIs on the web. The recent popularity of conversational interfaces and the advent of frameworks like Actions on Google and Alexa Skills, which allow developers to easily add support for new services, has resulted in a major increase in the number of application domains and individual services that assistants need to support, following the pattern of smartphone applications.
Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, M2M BIBREF1 and FRAMES BIBREF2.
However, existing datasets for multi-domain task-oriented dialogue do not sufficiently capture a number of challenges that arise with scaling virtual assistants in production. These assistants need to support a large BIBREF3, constantly increasing number of services over a large number of domains. In comparison, existing public datasets cover few domains. Furthermore, they define a single static API per domain, whereas multiple services with overlapping functionality, but heterogeneous interfaces, exist in the real world.
To highlight these challenges, we introduce the Schema-Guided Dialogue (SGD) dataset, which is, to the best of our knowledge, the largest public task-oriented dialogue corpus. It exceeds existing corpora in scale, with over 16000 dialogues in the training set spanning 26 services belonging to 16 domains (more details in Table TABREF2). Further, to adequately test the models' ability to generalize in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants.
We also propose the schema-guided paradigm for task-oriented dialogue, advocating building a single unified dialogue model for all services and APIs. Using a service's schema as input, the model would make predictions over this dynamic set of intents and slots present in the schema. This setting enables effective sharing of knowledge among all services, by relating the semantic information in the schemas, and allows the model to handle unseen services and APIs. Under the proposed paradigm, we present a novel architecture for multi-domain dialogue state tracking. By using large pretrained models like BERT BIBREF4, our model can generalize to unseen services and is robust to API changes, while achieving state-of-the-art results on the original and updated BIBREF5 MultiWOZ datasets.
<<</Introduction>>>
<<<Related Work>>>
Task-oriented dialogue systems have constituted an active area of research for decades. The growth of this field has been consistently fueled by the development of new datasets. Initial datasets were limited to one domain, such as ATIS BIBREF6 for spoken language understanding for flights. The Dialogue State Tracking Challenges BIBREF7, BIBREF8, BIBREF9, BIBREF10 contributed to the creation of dialogue datasets with increasing complexity. Other notable related datasets include WOZ2.0 BIBREF11, FRAMES BIBREF2, M2M BIBREF1 and MultiWOZ BIBREF0. These datasets have utilized a variety of data collection techniques, falling within two broad categories:
Wizard-of-Oz This setup BIBREF12 connects two crowd workers playing the roles of the user and the system. The user is provided a goal to satisfy, and the system accesses a database of entities, which it queries as per the user's preferences. WOZ2.0, FRAMES and MultiWOZ, among others, have utilized such methods.
Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically.
As virtual assistants incorporate diverse domains, recent work has focused on zero-shot modeling BIBREF13, BIBREF14, BIBREF15, domain adaptation and transfer learning techniques BIBREF16. Deep-learning based approaches have achieved state of the art performance on dialogue state tracking tasks. Popular approaches on small-scale datasets estimate the dialogue state as a distribution over all possible slot-values BIBREF17, BIBREF11 or individually score all slot-value combinations BIBREF18, BIBREF19. Such approaches are not practical for deployment in virtual assistants operating over real-world services having a very large and dynamic set of possible values. Addressing these concerns, approaches utilizing a dynamic vocabulary of slot values have been proposed BIBREF20, BIBREF21, BIBREF22.
<<</Related Work>>>
<<<The Schema-Guided Dialogue Dataset>>>
An important goal of this work is to create a benchmark dataset highlighting the challenges associated with building large-scale virtual assistants. Table TABREF2 compares our dataset with other public datasets. Our Schema-Guided Dialogue (SGD) dataset exceeds other datasets in most of the metrics at scale. The especially larger number of domains, slots, and slot values, and the presence of multiple services per domain, are representative of these scale-related challenges. Furthermore, our evaluation sets contain many services, and consequently slots, which are not present in the training set, to help evaluate model performance on unseen services.
The 17 domains (`Alarm' domain not included in training) present in our dataset are listed in Table TABREF5. We create synthetic implementations of a total of 34 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are a structured representation of dialogue semantics. We then used a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps in detail and then present analyses of the collected dataset.
<<<Services and APIs>>>
We define the schema for a service as a combination of intents and slots with additional constraints, with an example in Figure FIGREF7. We implement all services using a SQL engine. For constructing the underlying tables, we sample a set of entities from Freebase and obtain the values for slots defined in the schema from the appropriate attribute in Freebase. We decided to use Freebase to sample real-world entities instead of synthetic ones since entity attributes are often correlated (e.g., a restaurant's name is indicative of the cuisine served). Some slots like event dates/times and available ticket counts, which are not present in Freebase, are synthetically sampled.
To reflect the constraints present in real-world services and APIs, we impose a few other restrictions. First, our dataset does not expose the set of all possible slot values for some slots. Having such a list is impractical for slots like date or time because they have infinitely many possible values or for slots like movie or song names, for which new values are periodically added. Our dataset specifically identifies such slots as non-categorical and does not provide a set of all possible values for these. We also ensure that the evaluation sets have a considerable fraction of slot values not present in the training set to evaluate the models in the presence of new values. Some slots like gender, number of people, day of the week etc. are defined as categorical and we specify the set of all possible values taken by them. However, these values are not assumed to be consistent across services. E.g., different services may use (`male', `female'), (`M', `F') or (`he', `she') as possible values for gender slot.
Second, real-world services can only be invoked with a limited number of slot combinations: e.g. restaurant reservation APIs do not let the user search for restaurants by date without specifying a location. However, existing datasets simplistically allow service calls with any given combination of slot values, thus giving rise to flows unsupported by actual services or APIs. As in Figure FIGREF7, the different service calls supported by a service are listed as intents. Each intent specifies a set of required slots and the system is not allowed to call this intent without specifying values for these required slots. Each intent also lists a set of optional slots with default values, which the user can override.
<<</Services and APIs>>>
<<<Dialogue Simulator Framework>>>
The dialogue simulator interacts with the services to generate dialogue outlines. Figure FIGREF9 shows the overall architecture of our dialogue simulator framework. It consists of two agents playing the roles of the user and the system. Both agents interact with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. These dialogue acts can take a slot or a slot-value pair as argument. Figure FIGREF13 shows all dialogue acts supported by the agents.
At the start of a conversation, the user agent is seeded with a scenario, which is a sequence of intents to be fulfilled. We identified over 200 distinct scenarios for the training set, each comprising up to 5 intents. For multi-domain dialogues, we also identify combinations of slots whose values may be transferred when switching intents e.g. the 'address' slot value in a restaurant service could be transferred to the 'destination' slot for a taxi service invoked right after.
The user agent then generates the dialogue acts to be output in the next turn. It may retrieve arguments i.e. slot values for some of the generated acts by accessing either the service schema or the raw SQL backend. The acts, combined with the respective parameters yield the corresponding user actions. Next, the system agent generates the next set of actions using a similar procedure. Unlike the user agent, however, the system agent has restricted access to the services (denoted by dashed line), e.g. it can only query the services by supplying values for all required slots for some service call. This helps us ensure that all generated flows are valid.
After an intent is fulfilled through a series of user and system actions, the user agent queries the scenario to proceed to the next intent. Alternatively, the system may suggest related intents e.g. reserving a table after searching for a restaurant. The simulator also allows for multiple intents to be active during a given turn. While we skip many implementation details for brevity, it is worth noting that we do not include any domain-specific constraints in the simulation automaton. All domain-specific constraints are encoded in the schema and scenario, allowing us to conveniently use the simulator across a wide variety of domains and services.
<<</Dialogue Simulator Framework>>>
<<<Dialogue Paraphrasing>>>
The dialogue paraphrasing framework converts the outlines generated by the simulator into a natural conversation. Figure FIGREF11a shows a snippet of the dialogue outline generated by the simulator, containing a sequence of user and system actions. The slot values present in these actions are in a canonical form because they are obtained directly from the service. However, users may refer to these values in various different ways during the conversation, e.g., “los angeles" may be referred to as “LA" or “LAX". To introduce these natural variations in the slot values, we replace different slot values with a randomly selected variation (kept consistent across user turns in a dialogue) as shown in Figure FIGREF11b.
Next we define a set of action templates for converting each action into an utterance. A few examples of such templates are shown below. These templates are used to convert each action into a natural language utterance, and the resulting utterances for the different actions in a turn are concatenated together as shown in Figure FIGREF11c. The dialogue transformed by these steps is then sent to the crowd workers. One crowd worker is tasked with paraphrasing all utterances of a dialogue to ensure naturalness and coherence.
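The original template strings are not reproduced in this text; as a rough illustration of the mechanism only, the sketch below verbalizes a turn by filling per-act templates and concatenating the results. The act names follow the dialogue acts used in the dataset, but the surface forms and function names are hypothetical.

```python
# Minimal sketch of action-template verbalization (illustrative templates only;
# the real templates used for the dataset are not reproduced in this text).
ACTION_TEMPLATES = {
    "REQUEST": "What {slot} do you want?",
    "INFORM": "The {slot} is {value}.",
    "CONFIRM": "Please confirm: {slot} is {value}.",
    "OFFER": "How about {value} for {slot}?",
}

def verbalize_turn(actions):
    """Convert a list of (act, slot, value) triples into one template utterance."""
    pieces = []
    for act, slot, value in actions:
        template = ACTION_TEMPLATES.get(act, "{slot} {value}")
        pieces.append(template.format(slot=slot or "", value=value or "").strip())
    # Utterances for the different actions in a turn are concatenated together.
    return " ".join(pieces)

print(verbalize_turn([("INFORM", "restaurant_name", "Opa!"),
                      ("REQUEST", "time", None)]))
```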
In our paraphrasing task, the crowd workers are instructed to exactly repeat the slot values in their paraphrases. This not only helps us verify the correctness of the paraphrases, but also lets us automatically obtain slot spans in the generated utterances by string search. This automatic slot span generation greatly reduced the annotation effort required, with little impact on dialogue naturalness, thus allowing us to collect more data with the same resources. Furthermore, it is important to note that this entire procedure preserves all other annotations obtained from the simulator including the dialogue state. Hence, no further annotation is needed.
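A minimal sketch of this span-recovery step, assuming the worker repeated each slot value verbatim (case-insensitive matching is an implementation choice here, not necessarily the one used for the dataset):

```python
# Sketch: recover slot spans in a paraphrased utterance by exact string search,
# assuming crowd workers copied the slot values verbatim (as instructed).
def find_slot_spans(utterance, slot_values):
    """slot_values: dict slot_name -> value string used in the outline."""
    spans = {}
    lowered = utterance.lower()
    for slot, value in slot_values.items():
        start = lowered.find(value.lower())
        if start != -1:  # value repeated verbatim -> span found automatically
            spans[slot] = (start, start + len(value))
    return spans

utt = "Sure, I found 2 restaurants serving Mexican food in LA."
print(find_slot_spans(utt, {"cuisine": "Mexican", "city": "LA"}))
```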
<<</Dialogue Paraphrasing>>>
<<<Dataset Analysis>>>
With over 16000 dialogues in the training set, the Schema-Guided Dialogue dataset is the largest publicly available annotated task-oriented dialogue dataset. The annotations include the active intents and dialogue states for each user utterance and the system actions for every system utterance. We have a few other annotations like the user actions but we withhold them from the public release. These annotations enable our dataset to be used as a benchmark for tasks like intent detection, dialogue state tracking, imitation learning of dialogue policy, dialogue act to text generation, etc. The schemas contain semantic information about the service and the constituent intents and slots, in the form of natural language descriptions and other details (example in Figure FIGREF7).
The single-domain dialogues in our dataset contain an average of 15.3 turns, whereas the multi-domain ones contain 23 turns on average. These numbers are also reflected in Figure FIGREF13, which shows the histogram of dialogue lengths on the training set. Table TABREF5 shows the distribution of dialogues across the different domains. We note that the dataset is largely balanced in terms of the domains and services covered, with the exception of the Alarm domain, which is only present in the development set. Figure FIGREF13 shows the frequency of dialogue acts contained in the dataset. Note that all dialogue acts except INFORM, REQUEST and GOODBYE are specific to either the user or the system.
<<</Dataset Analysis>>>
<<</The Schema-Guided Dialogue Dataset>>>
<<<The Schema-Guided Approach>>>
Virtual assistants aim to support a large number of services available on the web. One possible approach is to define a large unified schema for the assistant, to which different service providers can integrate with. However, it is difficult to come up with a common schema covering all use cases. Having a common schema also complicates integration of tail services with limited developer support. We propose the schema-guided approach as an alternative to allow easy integration of new services and APIs.
Under our proposed approach, each service provides a schema listing the supported slots and intents along with their natural language descriptions (Figure FIGREF7 shows an example). These descriptions are used to obtain a semantic representation of these schema elements. The assistant employs a single unified model containing no domain or service specific parameters to make predictions conditioned on these schema elements. For example, Figure FIGREF14 shows how dialogue state representation for the same dialogue can vary for two different services. Here, the departure and arrival cities are captured by analogously functioning but differently named slots in both schemas. Furthermore, values for the number_stops and direct_only slots highlight idiosyncrasies between services interpreting the same concept.
There are many advantages to this approach. First, using a single model facilitates representation and transfer of common knowledge across related services. Second, since the model utilizes semantic representation of schema elements as input, it can interface with unseen services or APIs on which it has not been trained. Third, it is robust to changes like addition of new intents or slots to the service.
<<</The Schema-Guided Approach>>>
<<<Zero-Shot Dialogue State Tracking>>>
Models in the schema-guided setting can condition on the pertinent services' schemas using descriptions of intents and slots. These models, however, also need access to representations for potentially unseen inputs from new services. Recent pretrained models like ELMo BIBREF23 and BERT BIBREF4 can help, since they are trained on very large corpora. Building upon these, we present our zero-shot schema-guided dialogue state tracking model.
<<<Model>>>
We use a single model, shared among all services and domains, to make these predictions. We first encode all the intents, slots and slot values for categorical slots present in the schema into an embedded representation. Since different schemas can have differing numbers of intents or slots, predictions are made over dynamic sets of schema elements by conditioning them on the corresponding schema embeddings. This is in contrast to existing models which make predictions over a static schema and are hence unable to share knowledge across domains and services. They are also not robust to changes in schema and require the model to be retrained with new annotated data upon addition of a new intent, slot, or in some cases, a slot value to a service.
<<<Schema Embedding>>>
This component obtains the embedded representations of intents, slots and categorical slot values in each service schema. Table TABREF18 shows the sequence pairs used for embedding each schema element. These sequence pairs are fed to a pretrained BERT encoder shown in Figure FIGREF20 and the output $\mathbf {u}_{\texttt {CLS}}$ is used as the schema embedding.
For a given service with $I$ intents and $S$ slots, let $\lbrace \mathbf {i}_j\rbrace $, ${1 \le j \le I}$ and $\lbrace \mathbf {s}_j\rbrace $, ${1 \le j \le S}$ be the embeddings of all intents and slots respectively. As a special case, we let $\lbrace \mathbf {s}^n_j\rbrace $, ${1 \le j \le N \le S}$ denote the embeddings for the $N$ non-categorical slots in the service. Also, let $\lbrace \textbf {v}_j^k\rbrace $, $1 \le j \le V^k$ denote the embeddings for all possible values taken by the $k^{\text{th}}$ categorical slot, $1 \le k \le C$, with $C$ being the number of categorical slots and $N + C = S$. All these embeddings are collectively called schema embeddings.
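As an illustration of how such schema embeddings could be produced with an off-the-shelf BERT encoder, the sketch below feeds a sequence pair to BERT and keeps the [CLS] output; the checkpoint name and the example descriptions are assumptions, not the exact setup used for the dataset baselines.

```python
# Sketch: embed a schema element by feeding a sequence pair to BERT and taking
# the [CLS] output, roughly following Table TABREF18. Checkpoint is an assumption.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
encoder.eval()

def schema_embedding(text_a, text_b):
    """e.g. text_a = service description, text_b = slot/intent description."""
    inputs = tokenizer(text_a, text_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0, :]  # u_CLS, shape (1, hidden_size)

slot_emb = schema_embedding("Restaurant reservation service.",
                            "Number of seats to reserve at the restaurant.")
print(slot_emb.shape)  # torch.Size([1, 768])
```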
<<</Schema Embedding>>>
<<<Utterance Encoding>>>
Like BIBREF24, we use BERT to encode the user utterance and the preceding system utterance to obtain utterance pair embedding $\mathbf {u} = \mathbf {u}_{\texttt {CLS}}$ and token level representations $\mathbf {t}_1, \mathbf {t}_2 \cdots \mathbf {t}_M$, $M$ being the total number of tokens in the two utterances. The utterance and schema embeddings are used together to obtain model predictions using a set of projections (defined below).
<<</Utterance Encoding>>>
<<<Projection>>>
Let $\mathbf {x}, \mathbf {y} \in \mathbb {R}^d$. For a task $K$, we define $\mathbf {l} = \mathcal {F}_K(\mathbf {x}, \mathbf {y}, p)$ as a projection transforming $\mathbf {x}$ and $\mathbf {y}$ into the vector $\mathbf {l} \in \mathbb {R}^p$ using Equations DISPLAY_FORM22-. Here, $\mathbf {h_1},\mathbf {h_2} \in \mathbb {R}^d$, $W^K_i$ and $b^K_i$ for $1 \le i \le 3$ are trainable parameters of suitable dimensions and $A$ is the activation function. We use $\texttt {gelu}$ BIBREF25 activation as in BERT.
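Since the display equations are only referenced above, the sketch below shows one plausible instantiation of such a projection (three trainable linear maps with gelu activations, with $\mathbf{y}$ concatenated to an intermediate representation of $\mathbf{x}$); it should be read as an assumption about the general shape of $\mathcal{F}_K$, not as the exact published form.

```python
# Sketch of a projection F_K(x, y, p): one plausible instantiation with three
# trainable linear maps and gelu activations. The exact published equations
# (DISPLAY_FORM22-) are not reproduced in this text, so treat this as an
# assumption about their general shape, not the definitive form.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Projection(nn.Module):
    def __init__(self, d, p):
        super().__init__()
        self.lin1 = nn.Linear(d, d)          # W1, b1
        self.lin2 = nn.Linear(2 * d, d)      # W2, b2 (acts on y concatenated with h1)
        self.lin3 = nn.Linear(d, p)          # W3, b3 -> output l of dimension p

    def forward(self, x, y):
        h1 = F.gelu(self.lin1(x))
        h2 = F.gelu(self.lin2(torch.cat([y, h1], dim=-1)))
        return self.lin3(h2)                 # l in R^p

proj = Projection(d=768, p=3)
l = proj(torch.randn(1, 768), torch.randn(1, 768))
print(l.shape)  # torch.Size([1, 3])
```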
<<</Projection>>>
<<<Active Intent>>>
For a given service, the active intent denotes the intent requested by the user and currently being fulfilled by the system. It takes the value “NONE" if no intent for the service is currently being processed. Let $\mathbf {i}_0$ be a trainable parameter in $\mathbb {R}^d$ for the “NONE" intent. We define the intent network as below.
The logits $l^{j}_{\text{int}}$ are normalized using softmax to yield a distribution over all $I$ intents and the “NONE" intent. During inference, we predict the highest probability intent as active.
<<</Active Intent>>>
<<<Requested Slots>>>
These are the slots whose values are requested by the user in the current utterance. Projection $\mathcal {F}_{\text{req}}$ predicts logit $l^j_{\text{req}}$ for the $j^{\text{th}}$ slot. Obtained logits are normalized using sigmoid to get a score in $[0,1]$. During inference, all slots with $\text{score} > 0.5$ are predicted as requested.
<<</Requested Slots>>>
<<<User Goal>>>
We define the user goal as the user constraints specified over the dialogue context till the current user utterance. Instead of predicting the entire user goal after each user utterance, we predict the difference between the user goal for the current turn and preceding user turn. During inference, the predicted user goal updates are accumulated to yield the predicted user goal. We predict the user goal updates in two stages. First, for each slot, a distribution of size 3 denoting the slot status and taking values none, dontcare and active is obtained by normalizing the logits obtained in equation DISPLAY_FORM28 using softmax. If the status of a slot is predicted to be none, its assigned value is assumed to be unchanged. If the prediction is dontcare, then the special dontcare value is assigned to it. Otherwise, a slot value is predicted and assigned to it in the second stage.
In the second stage, equation is used to obtain a logit for each value taken by each categorical slot. Logits for a given categorical slot are normalized using softmax to get a distribution over all possible values. The value with the maximum mass is assigned to the slot. For each non-categorical slot, logits obtained using equations and are normalized using softmax to yield two distributions over all tokens. These two distributions respectively correspond to the start and end index of the span corresponding to the slot. The indices $p \le q$ maximizing $start[p] + end[q]$ are predicted to be the span boundary and the corresponding value is assigned to the slot.
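The span-decoding step can be made concrete with a short sketch that scans the token positions once and returns the pair $p \le q$ maximizing $start[p] + end[q]$ (raw scores stand in for the normalized distributions here):

```python
# Sketch: decode a non-categorical slot span by picking indices p <= q that
# maximize start[p] + end[q], as described above. Pure Python, single O(M) scan.
def best_span(start_scores, end_scores):
    best = (0, 0)
    best_score = float("-inf")
    best_start_so_far = 0  # index p <= q with the highest start score seen so far
    for q in range(len(end_scores)):
        if start_scores[q] > start_scores[best_start_so_far]:
            best_start_so_far = q
        score = start_scores[best_start_so_far] + end_scores[q]
        if score > best_score:
            best_score, best = score, (best_start_so_far, q)
    return best  # (p, q), token indices of the predicted span

print(best_span([0.1, 2.0, 0.3, 0.2], [0.0, 0.1, 1.5, 0.4]))  # (1, 2)
```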
<<</User Goal>>>
<<</Model>>>
<<<Evaluation>>>
We consider the following metrics for evaluation of the dialogue state tracking task:
Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted.
Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped.
Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. The slots which have a non-empty assignment in the ground truth dialogue state are considered for accuracy. This is the average accuracy of predicting the value of a slot correctly. A fuzzy matching score is used for non-categorical slots to reward partial matches with the ground truth.
Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a turn correctly. For non-categorical slots a fuzzy matching score is used.
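A sketch of how the two goal-accuracy metrics can be computed for a single turn is given below; the fuzzy matching score used in the official evaluation may differ from the simple string-similarity ratio and threshold assumed here.

```python
# Sketch: average and joint goal accuracy for one turn. The official fuzzy
# matching score for non-categorical slots may differ from the simple
# difflib ratio used here; the threshold is an illustrative assumption.
from difflib import SequenceMatcher

def slot_correct(pred, gold, categorical, fuzzy_threshold=0.9):
    if pred is None:
        return False
    if categorical:
        return pred == gold
    return SequenceMatcher(None, str(pred).lower(), str(gold).lower()).ratio() >= fuzzy_threshold

def goal_accuracies(pred_state, gold_state, categorical_slots):
    """States are dicts slot -> value; only slots set in the ground truth count."""
    flags = [slot_correct(pred_state.get(slot), gold, slot in categorical_slots)
             for slot, gold in gold_state.items()]
    average = sum(flags) / len(flags) if flags else 1.0
    joint = float(all(flags))
    return average, joint

gold = {"city": "Los Angeles", "party_size": "2"}
pred = {"city": "los angeles", "party_size": "2"}
print(goal_accuracies(pred, gold, categorical_slots={"party_size"}))
```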
<<<Performance on other datasets>>>
We evaluate our model on public datasets WOZ2.0, MultiWOZ 2.0 and the updated MultiWOZ 2.1 BIBREF5. As results in Table TABREF37 show, our model performs competitively on all these datasets. Furthermore, we obtain state-of-the-art joint goal accuracies of 0.516 on MultiWOZ 2.0 and 0.489 on MultiWOZ 2.1 test sets respectively, exceeding the best-known results of 0.486 and 0.456 on these datasets as reported in BIBREF5.
<<</Performance on other datasets>>>
<<<Performance on SGD>>>
The model performs well for Active Intent Accuracy and Requested Slots F1 across both seen and unseen services, shown in Table TABREF37. For joint goal and average goal accuracy, the model performs better on seen services compared to unseen ones (Figure FIGREF38). The main reason for this performance difference is a significantly higher OOV rate for slot values of unseen services.
<<</Performance on SGD>>>
<<<Performance on different domains (SGD)>>>
The model performance also varies across domains, as shown in Table TABREF39. We observe that one of the factors affecting the performance across domains is still the presence of the service in the training data (seen services). Among the seen services, those in the `Events' domain have a very low OOV rate for slot values and the largest number of training examples, which might be contributing to the high joint goal accuracy. For unseen services, we notice that the `Services' domain has a lower joint goal accuracy because of a higher OOV rate and a higher average number of turns per dialogue. For the `Services' and `Flights' domains, the difference between joint goal accuracy and average accuracy indicates a possible skew in performance across slots, where the performance on a few slots is much worse than on the others, thus considerably degrading the joint goal accuracy. The `RideSharing' domain also exhibits poor performance, since it has the largest number of possible slot values across the dataset. We also notice that for categorical slots with similar slot values (e.g. “Psychologist" and “Psychiatrist"), there is a very weak signal for the model to distinguish between the different classes, resulting in inferior performance.
<<</Performance on different domains (SGD)>>>
<<</Evaluation>>>
<<</Zero-Shot Dialogue State Tracking>>>
<<<Discussion>>>
It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below.
Fewer Annotation Errors: All annotations are automatically generated, so these errors are rare. In contrast, BIBREF5 reported annotation errors in 40% of turns in MultiWOZ 2.0 which utilized a Wizard-of-Oz setup.
Simpler Task: The crowd worker task of paraphrasing a readable utterance for each turn is simple. The error-prone annotation task requiring skilled workers is not needed.
Low Cost: The simplicity of the crowd worker task and lack of an annotation task greatly cut data collection costs.
Better Coverage: A wide variety of dialogue flows can be collected and specific usecases can be targeted.
<<</Discussion>>>
<<<Conclusions>>>
We presented the Schema-Guided Dialogue dataset to encourage scalable modeling approaches for virtual assistants. We also introduced the schema-guided paradigm for task-oriented dialogue that simplifies the integration of new services and APIs with large scale virtual assistants. Building upon this paradigm, we present a scalable zero-shot dialogue state tracking model achieving state-of-the-art results.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Introduction, The Schema-Guided Dialogue Dataset"
],
"type": "disordered_section"
}
|
2004.04696
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
BLEURT: Learning Robust Metrics for Text Generation
<<<Abstract>>>
Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.
<<</Abstract>>>
<<<Introduction>>>
In the last few years, research in natural text generation (NLG) has made significant progress, driven largely by the neural encoder-decoder paradigm BIBREF0, BIBREF1 which can tackle a wide array of tasks including translation BIBREF2, summarization BIBREF3, BIBREF4, structured-data-to-text generation BIBREF5, BIBREF6, BIBREF7, dialog BIBREF8, BIBREF9 and image captioning BIBREF10. However, progress is increasingly impeded by the shortcomings of existing metrics BIBREF7, BIBREF11, BIBREF12.
Human evaluation is often the best indicator of the quality of a system. However, designing crowd sourcing experiments is an expensive and high-latency process, which does not easily fit in a daily model development pipeline. Therefore, NLG researchers commonly use automatic evaluation metrics, which provide an acceptable proxy for quality and are very cheap to compute. This paper investigates sentence-level, reference-based metrics, which describe the extent to which a candidate sentence is similar to a reference one. The exact definition of similarity may range from string overlap to logical entailment.
The first generation of metrics relied on handcrafted rules that measure the surface similarity between the sentences. To illustrate, BLEU BIBREF13 and ROUGE BIBREF14, two popular metrics, rely on N-gram overlap. Because those metrics are only sensitive to lexical variation, they cannot appropriately reward semantic or syntactic variations of a given reference. Thus, they have been repeatedly shown to correlate poorly with human judgment, in particular when all the systems to compare have a similar level of accuracy BIBREF15, BIBREF16, BIBREF17.
Increasingly, NLG researchers have addressed those problems by injecting learned components in their metrics. To illustrate, consider the WMT Metrics Shared Task, an annual benchmark in which translation metrics are compared on their ability to imitate human assessments. The last two years of the competition were largely dominated by neural net-based approaches, RUSE, YiSi and ESIM BIBREF18, BIBREF11. Current approaches largely fall into two categories. Fully learned metrics, such as BEER, RUSE, and ESIM, are trained end-to-end, and they typically rely on handcrafted features and/or learned embeddings. Conversely, hybrid metrics, such as YiSi and BERTscore, combine trained elements, e.g., contextual embeddings, with handwritten logic, e.g., token alignment rules. The first category typically offers great expressivity: if a training set of human ratings data is available, the metrics may take full advantage of it and fit the ratings distribution tightly. Furthermore, learned metrics can be tuned to measure task-specific properties, such as fluency, faithfulness, grammaticality, or style. On the other hand, hybrid metrics offer robustness. They may provide better results when there is little to no training data, and they do not rely on the assumption that training and test data are identically distributed.
And indeed, the iid assumption is particularly problematic in NLG evaluation because of domain drifts, that have been the main target of the metrics literature, but also because of quality drifts: NLG systems tend to get better over time, and therefore a model trained on ratings data from 2015 may fail to distinguish top performing systems in 2019, especially for newer research tasks. An ideal learned metric would be able to both take full advantage of available ratings data for training, and be robust to distribution drifts, i.e., it should be able to extrapolate.
Our insight is that it is possible to combine expressivity and robustness by pre-training a fully learned metric on large amounts of synthetic data, before fine-tuning it on human ratings. To this end, we introduce Bleurt, a text generation metric based on BERT BIBREF19. A key ingredient of Bleurt is a novel pre-training scheme, which uses random perturbations of Wikipedia sentences augmented with a diverse set of lexical and semantic-level supervision signals.
To demonstrate our approach, we train Bleurt for English and evaluate it under different generalization regimes. We first verify that it provides state-of-the-art results on all recent years of the WMT Metrics Shared task (2017 to 2019, to-English language pairs). We then stress-test its ability to cope with quality drifts with a synthetic benchmark based on WMT 2017. Finally, we show that it can easily adapt to a different domain with three tasks from a data-to-text dataset, WebNLG 2017 BIBREF20. Ablations show that our synthetic pretraining scheme increases performance in the iid setting, and is critical to ensure robustness when the training data is scarce, skewed, or out-of-domain.
<<</Introduction>>>
<<<Preliminaries>>>
Define $\mathbf{x} = (x_1, \dots, x_{r})$ to be the reference sentence of length $r$ where each $x_i$ is a token and let $\tilde{\mathbf{x}} = (\tilde{x}_1, \dots, \tilde{x}_{p})$ be a prediction sentence of length $p$. Let $\lbrace (\mathbf{x}_i, \tilde{\mathbf{x}}_i, y_i)\rbrace _{i=1}^{N}$ be a training dataset of size $N$ where $y_i \in [0, 1]$ is the human rating that indicates how good $\tilde{\mathbf{x}}_i$ is with respect to $\mathbf{x}_i$. Given the training data, our goal is to learn a function $f: (\mathbf{x}, \tilde{\mathbf{x}}) \rightarrow y$ that predicts the human rating.
<<</Preliminaries>>>
<<<Fine-Tuning BERT for Quality Evaluation>>>
Given the small amounts of rating data available, it is natural to leverage unsupervised representations for this task. In our model, we use BERT (Bidirectional Encoder Representations from Transformers) BIBREF19, which is an unsupervised technique that learns contextualized representations of sequences of text. Given $\mathbf{x}$ and $\tilde{\mathbf{x}}$, BERT is a Transformer BIBREF21 that returns a sequence of contextualized vectors:
where $\mathbf{v}_{\mathrm{[CLS]}}$ is the representation for the special $\mathrm {[CLS]}$ token. As described by devlin2018bert, we add a linear layer on top of the $\mathrm {[CLS]}$ vector to predict the rating:
where $\mathbf{W}$ and $\mathbf{b}$ are the weight matrix and bias vector respectively. Both the above linear layer as well as the BERT parameters are trained (i.e. fine-tuned) on the supervised data, which typically amounts to a few thousand examples. We use the regression loss $\ell _{\textrm {supervised}} = \frac{1}{N} \sum _{i=1}^{N} \Vert y_i - \hat{y}_i \Vert ^2 $.
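A minimal sketch of this fine-tuning setup, assuming the Hugging Face BERT implementation (the checkpoint name and the example rating are illustrative choices):

```python
# Minimal sketch of the rating model: BERT encodes the (reference, candidate)
# pair, a linear layer on top of the [CLS] vector predicts the rating, and the
# model is fine-tuned with an l2 regression loss. Checkpoint name is an assumption.
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

class RatingModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.head = nn.Linear(self.bert.config.hidden_size, 1)  # W, b

    def forward(self, reference, candidate):
        inputs = tokenizer(reference, candidate, return_tensors="pt", truncation=True)
        v_cls = self.bert(**inputs).last_hidden_state[:, 0, :]  # v_[CLS]
        return self.head(v_cls).squeeze(-1)                     # predicted rating

model = RatingModel()
y_hat = model("the cat sat on the mat", "a cat was sitting on the mat")
loss = nn.MSELoss()(y_hat, torch.tensor([0.9]))  # supervised regression loss
loss.backward()
```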
Although this approach is quite straightforward, we will show in Section SECREF5 that it gives state-of-the-art results on WMT Metrics Shared Task 17-19, which makes it a high-performing evaluation metric. However, fine-tuning BERT requires a sizable amount of iid data, which is less than ideal for a metric that should generalize to a variety of tasks and model drift.
<<</Fine-Tuning BERT for Quality Evaluation>>>
<<<Pre-Training on Synthetic Data>>>
The key aspect of our approach is a pre-training technique that we use to “warm up” BERT before fine-tuning on rating data. We generate a large number of synthetic reference-candidate pairs $(\mathbf{z}, \tilde{\mathbf{z}})$, and we train BERT on several lexical- and semantic-level supervision signals with a multitask loss. As our experiments will show, Bleurt generalizes much better after this phase, especially with incomplete training data.
Any pre-training approach requires a dataset and a set of pre-training tasks. Ideally, the setup should resemble the final NLG evaluation task, i.e., the sentence pairs should be distributed similarly and the pre-training signals should correlate with human ratings. Unfortunately, we cannot have access to the NLG models that we will evaluate in the future. Therefore, we optimized our scheme for generality, with three requirements. (1) The set of reference sentences should be large and diverse, so that Bleurt can cope with a wide range of NLG domains and tasks. (2) The sentence pairs should contain a wide variety of lexical, syntactic, and semantic dissimilarities. The aim here is to anticipate all variations that an NLG system may produce, e.g., phrase substitution, paraphrases, noise, or omissions. (3) The pre-training objectives should effectively capture those phenomena, so that Bleurt can learn to identify them. The following sections present our approach.
<<<Generating Sentence Pairs>>>
One way to expose Bleurt to a wide variety of sentence differences is to use existing sentence pairs datasets BIBREF22, BIBREF23, BIBREF24. These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions). We opted for an automatic approach instead, which can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(\mathbf{z}, \tilde{\mathbf{z}})$ by randomly perturbing 1.8 million segments $\mathbf{z}$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. We obtain about 6.5 million perturbations $\tilde{\mathbf{z}}$. Let us describe those techniques.
<<<Mask-filling with BERT:>>>
BERT's initial training task is to fill gaps (i.e., masked tokens) in tokenized sentences. We leverage this functionality by inserting masks at random positions in the Wikipedia sentences, and fill them with the language model. Thus, we introduce lexical alterations while maintaining the fluency of the sentence. We use two masking strategies—we either introduce the masks at random positions in the sentences, or we create contiguous sequences of masked tokens. More details are provided in the Appendix.
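A simplified sketch of this perturbation, with a single randomly placed mask rather than the full masking strategies described in the Appendix:

```python
# Sketch: perturb a sentence by masking one random token and letting a BERT
# masked-LM fill the gap. Mask positions and lengths here are simplified
# relative to the strategies described in the Appendix.
import random
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

def mask_fill_perturb(sentence, seed=0):
    random.seed(seed)
    tokens = sentence.split()
    i = random.randrange(len(tokens))
    tokens[i] = unmasker.tokenizer.mask_token  # "[MASK]" for BERT
    masked = " ".join(tokens)
    return unmasker(masked, top_k=1)[0]["sequence"]  # highest-scoring completion

print(mask_fill_perturb("The quick brown fox jumps over the lazy dog."))
```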
<<</Mask-filling with BERT:>>>
<<<Backtranslation:>>>
We generate paraphrases and perturbations with backtranslation, that is, round trips from English to another language and then back to English with a translation model BIBREF25, BIBREF26, BIBREF27. Our primary aim is to create variants of the reference sentence that preserve semantics. Additionally, we use the mispredictions of the backtranslation models as a source of realistic alterations.
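As an illustration, the round trip can be implemented with any pair of English-to-French and French-to-English translation models; the publicly available OPUS-MT checkpoints below are an assumption, not necessarily the models used in the paper.

```python
# Sketch: create a paraphrase/perturbation by round-tripping English -> French
# -> English. The OPUS-MT checkpoints below are one convenient public choice;
# the translation models used in the paper may differ.
from transformers import pipeline

en_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
fr_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

def backtranslate(sentence):
    french = en_fr(sentence)[0]["translation_text"]
    return fr_en(french)[0]["translation_text"]

reference = "He established a small research lab near the old harbour."
print(backtranslate(reference))  # a semantics-preserving (or slightly altered) variant
```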
<<</Backtranslation:>>>
<<<Dropping words:>>>
We found it useful in our experiments to randomly drop words from the synthetic examples above to create other examples. This method prepares Bleurt for “pathological” behaviors of NLG systems, e.g., void predictions or sentence truncation.
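A minimal sketch, with an illustrative drop probability:

```python
# Sketch: randomly drop words from a synthetic example to simulate truncation
# or void predictions. The drop probability is an illustrative choice.
import random

def drop_words(sentence, p_drop=0.3, seed=0):
    random.seed(seed)
    kept = [w for w in sentence.split() if random.random() >= p_drop]
    return " ".join(kept)

print(drop_words("the model established a small research lab near the harbour"))
```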
<<</Dropping words:>>>
<<</Generating Sentence Pairs>>>
<<<Pre-Training Signals>>>
The next step is to augment each sentence pair $(\mathbf{z}, \tilde{\mathbf{z}})$ with a set of pre-training signals $\lbrace {\tau }_k\rbrace $, where ${\tau }_k$ is the target vector of pre-training task $k$. Good pre-training signals should capture a wide variety of lexical and semantic differences. They should also be cheap to obtain, so that the approach can scale to large amounts of synthetic data. The following section presents our 9 pre-training tasks, summarized in Table TABREF3. Additional implementation details are in the Appendix.
<<<Automatic Metrics:>>>
We create three signals ${\tau _{\text{BLEU}}}$, ${\tau _{\text{ROUGE}}}$, and ${\tau _{\text{BERTscore}}}$ with sentence BLEU BIBREF13, ROUGE BIBREF14, and BERTscore BIBREF28 respectively (we use precision, recall and F-score for the latter two).
<<</Automatic Metrics:>>>
<<<Backtranslation Likelihood:>>>
The idea behind this signal is to leverage existing translation models to measure semantic equivalence. Given a pair $(\mathbf{z}, \tilde{\mathbf{z}})$, this training signal measures the probability that $\tilde{\mathbf{z}}$ is a backtranslation of $\mathbf{z}$, $P(\tilde{\mathbf{z}} \mid \mathbf{z})$, normalized by the length of $\tilde{\mathbf{z}}$. Let $P_{\texttt {en}\rightarrow \texttt {fr}}(\mathbf{z}_{\texttt {fr}} \mid \mathbf{z})$ be a translation model that assigns probabilities to French sentences $\mathbf{z}_{\texttt {fr}}$ conditioned on English sentences $\mathbf{z}$ and let $P_{\texttt {fr}\rightarrow \texttt {en}}(\mathbf{z} \mid \mathbf{z}_{\texttt {fr}})$ be a translation model that assigns probabilities to English sentences given French sentences. If $|\tilde{\mathbf{z}}|$ is the number of tokens in $\tilde{\mathbf{z}}$, we define our score as ${\tau }_{\text{en-fr}, \tilde{\mathbf{z}} \mid \mathbf{z}} = \frac{\log P(\tilde{\mathbf{z}} \mid \mathbf{z})}{|\tilde{\mathbf{z}}|}$, with $P(\tilde{\mathbf{z}} \mid \mathbf{z}) = \sum _{\mathbf{z}_{\texttt {fr}}} P_{\texttt {fr}\rightarrow \texttt {en}}(\tilde{\mathbf{z}} \mid \mathbf{z}_{\texttt {fr}})\, P_{\texttt {en}\rightarrow \texttt {fr}}(\mathbf{z}_{\texttt {fr}} \mid \mathbf{z})$.
Because computing the summation over all possible French sentences is intractable, we approximate the sum using $\mathbf{z}_{\texttt {fr}}^\ast = \operatorname{arg\,max}_{\mathbf{z}_{\texttt {fr}}} P_{\texttt {en}\rightarrow \texttt {fr}}(\mathbf{z}_{\texttt {fr}} \mid \mathbf{z})$ and we assume that $P_{\texttt {en}\rightarrow \texttt {fr}}(\mathbf{z}_{\texttt {fr}}^\ast \mid \mathbf{z}) \approx 1$, so that $P(\tilde{\mathbf{z}} \mid \mathbf{z}) \approx P_{\texttt {fr}\rightarrow \texttt {en}}(\tilde{\mathbf{z}} \mid \mathbf{z}_{\texttt {fr}}^\ast )$.
We can trivially reverse the procedure to compute $P(\mathbf{z} \mid \tilde{\mathbf{z}})$, thus we create 4 pre-training signals ${\tau }_{\text{en-fr}, \mathbf{z} \mid \tilde{\mathbf{z}}}$, ${\tau }_{\text{en-fr}, \tilde{\mathbf{z}} \mid \mathbf{z}}$, ${\tau }_{\text{en-de}, \mathbf{z} \mid \tilde{\mathbf{z}}}$, ${\tau }_{\text{en-de}, \tilde{\mathbf{z}} \mid \mathbf{z}}$ with two pairs of languages ($\texttt {en}\leftrightarrow \texttt {de}$ and $\texttt {en}\leftrightarrow \texttt {fr}$) in both directions.
<<</Backtranslation Likelihood:>>>
<<<Textual Entailment:>>>
The signal ${\tau }_\text{entail}$ expresses whether $\mathbf{z}$ entails or contradicts $\tilde{\mathbf{z}}$ using a classifier. We report the probability of three labels: Entail, Contradict, and Neutral, using BERT fine-tuned on an entailment dataset, MNLI BIBREF19, BIBREF23.
<<</Textual Entailment:>>>
<<<Backtranslation flag:>>>
The signal ${\tau }_\text{backtran\_flag}$ is a Boolean that indicates whether the perturbation was generated with backtranslation or with mask-filling.
<<</Backtranslation flag:>>>
<<</Pre-Training Signals>>>
<<<Modeling>>>
For each pre-training task, our model uses either a regression or a classification loss. We then aggregate the task-level losses with a weighted sum.
Let ${\tau }_k$ describe the target vector for each task, e.g., the probabilities for the classes Entail, Contradict, Neutral, or the precision, recall, and F-score for ROUGE. If ${\tau }_k$ is a regression task, then the loss used is the $\ell _2$ loss, i.e. $\ell _k = \Vert {\tau }_k - \hat{{\tau }}_k \Vert _2^2 / |{\tau }_k|$ where $|{\tau }_k|$ is the dimension of ${\tau }_k$ and $\hat{{\tau }}_k$ is computed by using a task-specific linear layer on top of the $\textrm {[CLS]}$ embedding: $\hat{{\tau }}_k = \mathbf{W}_{\tau _k} \tilde{\mathbf{v}}_{\textrm {[CLS]}} + \mathbf{b}_{\tau _k}$. If ${\tau }_k$ is a classification task, we use a separate linear layer to predict a logit for each class $c$: $\hat{{\tau }}_{kc} = \mathbf{W}_{\tau _{kc}} \tilde{\mathbf{v}}_{\textrm {[CLS]}} + \mathbf{b}_{\tau _{kc}}$, and we use the multiclass cross-entropy loss. We define our aggregate pre-training loss function as $\ell _{\text{pre-training}} = \frac{1}{M} \sum _{m=1}^{M} \sum _{k=1}^{K} \gamma _k\, \ell _k({\tau }_k^m, \hat{{\tau }}_k^m)$, where ${\tau }_k^m$ is the target vector for example $m$, $M$ is the number of synthetic examples, and $\gamma _k$ are hyperparameter weights obtained with grid search (more details in the Appendix).
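A sketch of this aggregation, assuming PyTorch; the task names, shapes, and weights below are placeholders rather than the values obtained by grid search.

```python
# Sketch: aggregate per-task pre-training losses with task weights gamma_k.
# Regression tasks use an l2-style loss, classification tasks use multiclass
# cross-entropy; the weights below are placeholders, not the tuned values.
import torch
import torch.nn.functional as F

def pretraining_loss(task_outputs, task_targets, task_types, gammas):
    """All arguments are dicts keyed by task name; types are 'regression' or 'classification'."""
    total = 0.0
    for k, pred in task_outputs.items():
        target = task_targets[k]
        if task_types[k] == "regression":
            loss_k = F.mse_loss(pred, target)       # ~ ||tau_k - tau_hat_k||^2 / |tau_k|
        else:
            loss_k = F.cross_entropy(pred, target)  # multiclass cross-entropy
        total = total + gammas[k] * loss_k
    return total

outputs = {"rouge": torch.randn(8, 3), "entail": torch.randn(8, 3)}
targets = {"rouge": torch.rand(8, 3), "entail": torch.randint(0, 3, (8,))}
loss = pretraining_loss(outputs, targets,
                        {"rouge": "regression", "entail": "classification"},
                        {"rouge": 1.0, "entail": 1.0})
loss.backward()
```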
<<</Modeling>>>
<<</Pre-Training on Synthetic Data>>>
<<<Experiments>>>
In this section, we report our experimental results for two tasks, translation and data-to-text. First, we benchmark Bleurt against existing text generation metrics on the last 3 years of the WMT Metrics Shared Task BIBREF29. We then evaluate its robustness to quality drifts with a series of synthetic datasets based on WMT17. We test Bleurt's ability to adapt to different tasks with the WebNLG 2017 Challenge Dataset BIBREF20. Finally, we measure the contribution of each pre-training task with ablation experiments.
<<<Our Models:>>>
Unless specified otherwise, all Bleurt models are trained in three steps: regular BERT pre-training BIBREF19, pre-training on synthetic data (as explained in Section SECREF4), and fine-tuning on task-specific ratings (translation and/or data-to-text). We experiment with two versions of Bleurt, BLEURT and BLEURTbase, respectively based on BERT-Large (24 layers, 1024 hidden units, 16 heads) and BERT-Base (12 layers, 768 hidden units, 12 heads) BIBREF19, both uncased. We use batch size 32, learning rate 1e-5, and 800,000 steps for pre-training and 40,000 steps for fine-tuning. We provide the full detail of our training setup in the Appendix.
<<</Our Models:>>>
<<<WMT Metrics Shared Task>>>
<<<Datasets and Metrics:>>>
We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which includes several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year. The test sets for years 2018 and 2019 are noisier, as reported by the organizers and shown by the overall lower correlations.
We evaluate the agreement between the automatic metrics and the human ratings. For each year, we report two metrics: Kendall's Tau $\tau $ (for consistency across experiments), and the official WMT metric for that year (for completeness). The official WMT metric is either Pearson's correlation or a robust variant of Kendall's Tau called DARR, described in the Appendix. All the numbers come from our own implementation of the benchmark. Our results are globally consistent with the official results but we report small differences in 2018 and 2019, marked in the tables.
<<</Datasets and Metrics:>>>
<<<Models:>>>
We experiment with four versions of Bleurt: BLEURT, BLEURTbase, BLEURT -pre and BLEURTbase -pre. The first two models are based on BERT-large and BERT-base. In the latter two versions, we skip the pre-training phase and fine-tune directly on the WMT ratings. For each year of the WMT shared task, we use the test set from the previous years for training and validation. We describe our setup in further detail in the Appendix. We compare Bleurt to participant data from the shared task and automatic metrics that we ran ourselves. In the former case, we use the best-performing contestants for each year, that is, chrF++, BEER, Meteor++, RUSE, Yisi1, ESIM and Yisi1-SRL BIBREF30. All the contestants use the same WMT training data, in addition to existing sentence or token embeddings. In the latter case, we use Moses sentenceBLEU, BERTscore BIBREF28, and MoverScore BIBREF31. For BERTscore, we use BERT-large uncased for fairness, and roBERTa (the recommended version) for completeness BIBREF32. We run MoverScore on WMT 2017 using the scripts published by the authors.
<<</Models:>>>
<<<Results:>>>
Tables TABREF14, TABREF15, TABREF16 show the results. For years 2017 and 2018, a Bleurt-based metric dominates the benchmark for each language pair (Tables TABREF14 and TABREF15). BLEURT and BLEURTbase are also competitive for year 2019: they yield the best results for every language pair on Kendall's Tau, and they come first for 4 out of 7 pairs on DARR. As expected, BLEURT dominates BLEURTbase in the majority of cases. Pre-training consistently improves the results of BLEURT and BLEURTbase. We observe the largest effect on year 2017, where it adds up to 7.4 Kendall Tau points for BLEURTbase (zh-en). The effect is milder on years 2018 and 2019, up to 2.1 points (tr-en, 2018). We explain the difference by the fact that the training data used for 2017 is smaller than the datasets used for the following years, so pre-training is likelier to help. In general pre-training yields higher returns for BERT-base than for BERT-large—in fact, BLEURTbase with pre-training is often better than BLEURT without.
Takeaways: Pre-training delivers consistent improvements, especially for BERT-base. Bleurt yields state-of-the art performance for all years of the WMT Metrics Shared task.
<<</Results:>>>
<<</WMT Metrics Shared Task>>>
<<<Robustness to Quality Drift>>>
We assess our claim that pre-training makes Bleurt robust to quality drifts, by constructing a series of tasks for which it is increasingly pressured to extrapolate. All the experiments that follow are based on the WMT Metrics Shared Task 2017, because the ratings for this edition are particularly reliable.
<<<Methodology:>>>
We create increasingly challenging datasets by sub-sampling the records from the WMT Metrics shared task, keeping low-rated translations for training and high-rated translations for test. The key parameter is the skew factor $\alpha $, that measures how much the training data is left-skewed and the test data is right-skewed. Figure FIGREF24 demonstrates the ratings distribution that we used in our experiments. The training data shrinks as $\alpha $ increases: in the most extreme case ($\alpha =3.0$), we use only 11.9% of the original 5,344 training records. We give the full detail of our sampling methodology in the Appendix.
We use BLEURT with and without pre-training and we compare to Moses sentBLEU and BERTscore. We use BERT-large uncased for both BLEURT and BERTscore.
<<</Methodology:>>>
<<<Takeaways:>>>
Pre-training makes BLEURT significantly more robust to quality drifts.
<<</Takeaways:>>>
<<</Robustness to Quality Drift>>>
<<<WebNLG Experiments>>>
In this section, we evaluate Bleurt's performance on three tasks from a data-to-text dataset, the WebNLG Challenge 2017 BIBREF33. The aim is to assess Bleurt's capacity to adapt to new tasks with limited training data.
<<<Dataset and Evaluation Tasks:>>>
The WebNLG challenge benchmarks systems that produce natural language descriptions of entities (e.g., buildings, cities, artists) from sets of 1 to 5 RDF triples. The organizers released the human assessments for 9 systems over 223 inputs, that is, 4,677 sentence pairs in total (we removed null values). Each input comes with 1 to 3 reference descriptions. The submissions are evaluated on 3 aspects: semantics, grammar, and fluency. We treat each type of rating as a separate modeling task. The data has no natural split between train and test, therefore we experiment with several schemes. We allocate 0% to about 50% of the data to training, and we split on either the evaluated systems or the RDF inputs in order to test different generalization regimes.
<<</Dataset and Evaluation Tasks:>>>
<<<Systems and Baselines:>>>
BLEURT -pre -wmt is a public BERT-large uncased checkpoint directly trained on the WebNLG ratings. BLEURT -wmt was first pre-trained on synthetic data, then fine-tuned on WebNLG data. BLEURT was trained in three steps: first on synthetic data, then on WMT data (16-18), and finally on WebNLG data. When a record comes with several references, we run BLEURT on each reference and report the highest value BIBREF28.
We report four baselines: BLEU, TER, Meteor, and BERTscore. The first three were computed by the WebNLG competition organizers. We ran the latter one ourselves, using BERT-large uncased for a fair comparison.
<<</Systems and Baselines:>>>
<<</WebNLG Experiments>>>
<<<Ablation Experiments>>>
Figure FIGREF36 presents our ablation experiments on WMT 2017, which highlight the relative importance of each pre-training task. On the left side, we compare Bleurt pre-trained on a single task to Bleurt without pre-training. On the right side, we compare full Bleurt to Bleurt pre-trained on all tasks except one. Pre-training on BERTscore, entailment, and the backtranslation scores yields improvements (symmetrically, ablating them degrades Bleurt). Conversely, BLEU and ROUGE have a negative impact. We conclude that pre-training on high quality signals helps BLEURT, but that metrics that correlate less well with human judgment may in fact harm the model.
<<</Ablation Experiments>>>
<<</Experiments>>>
<<<Related Work>>>
The WMT shared metrics competition BIBREF34, BIBREF18, BIBREF11 has inspired the creation of many learned metrics, some of which use regression or deep learning BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF30. Other metrics have been introduced, such as the recent MoverScore BIBREF31 which combines contextual embeddings and Earth Mover's Distance. We provide a head-to-head comparison with the best performing of those in our experiments. Other approaches do not attempt to estimate quality directly, but use information extraction or question answering as a proxy BIBREF7, BIBREF39, BIBREF40. Those are complementary to our work.
There has been recent work that uses BERT for evaluation. BERTScore BIBREF28 proposes replacing the hard n-gram overlap of BLEU with a soft-overlap using BERT embeddings. We use it in all our experiments. Bertr BIBREF30 and YiSi BIBREF30 also make use of BERT embeddings to compute a similarity score. Sum-QE BIBREF41 fine-tunes BERT for quality estimation as we describe in Section SECREF3. Our focus is different—we train metrics that are not only state-of-the-art in conventional iid experimental setups, but also robust in the presence of scarce and out-of-distribution training data. To our knowledge no existing work has explored pre-training and extrapolation in the context of NLG.
Noisy pre-training has been proposed before for other tasks such as paraphrasing BIBREF42, BIBREF43 but generally not with synthetic data. Generating synthetic data via paraphrases and perturbations has been commonly used for generating adversarial examples BIBREF44, BIBREF45, BIBREF46, BIBREF47, an orthogonal line of research.
<<</Related Work>>>
<<<Conclusion>>>
We presented Bleurt, a reference-based text generation metric for English. Because the metric is trained end-to-end, Bleurt can model human assessment with superior accuracy. Furthermore, pre-training makes the metric particularly robust to both domain and quality drifts. Future research directions include multilingual NLG evaluation, and hybrid methods involving both humans and classifiers.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Preliminaries"
],
"type": "disordered_section"
}
|
2004.04696
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
BLEURT: Learning Robust Metrics for Text Generation
<<<Abstract>>>
Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.
<<</Abstract>>>
<<<Introduction>>>
In the last few years, research in natural text generation (NLG) has made significant progress, driven largely by the neural encoder-decoder paradigm BIBREF0, BIBREF1 which can tackle a wide array of tasks including translation BIBREF2, summarization BIBREF3, BIBREF4, structured-data-to-text generation BIBREF5, BIBREF6, BIBREF7 dialog BIBREF8, BIBREF9 and image captioning BIBREF10. However, progress is increasingly impeded by the shortcomings of existing metrics BIBREF7, BIBREF11, BIBREF12.
Human evaluation is often the best indicator of the quality of a system. However, designing crowd sourcing experiments is an expensive and high-latency process, which does not easily fit in a daily model development pipeline. Therefore, NLG researchers commonly use automatic evaluation metrics, which provide an acceptable proxy for quality and are very cheap to compute. This paper investigates sentence-level, reference-based metrics, which describe the extent to which a candidate sentence is similar to a reference one. The exact definition of similarity may range from string overlap to logical entailment.
The first generation of metrics relied on handcrafted rules that measure the surface similarity between the sentences. To illustrate, BLEU BIBREF13 and ROUGE BIBREF14, two popular metrics, rely on N-gram overlap. Because those metrics are only sensitive to lexical variation, they cannot appropriately reward semantic or syntactic variations of a given reference. Thus, they have been repeatedly shown to correlate poorly with human judgment, in particular when all the systems to compare have a similar level of accuracy BIBREF15, BIBREF16, BIBREF17.
Increasingly, NLG researchers have addressed those problems by injecting learned components in their metrics. To illustrate, consider the WMT Metrics Shared Task, an annual benchmark in which translation metrics are compared on their ability to imitate human assessments. The last two years of the competition were largely dominated by neural net-based approaches, RUSE, YiSi and ESIM BIBREF18, BIBREF11. Current approaches largely fall into two categories. Fully learned metrics, such as BEER, RUSE, and ESIM are trained end-to-end, and they typically rely on handcrafted features and/or learned embeddings. Conversely, hybrid metrics, such as YiSi and BERTscore combine trained elements, e.g., contextual embeddings, with handwritten logic, e.g., as token alignment rules. The first category typically offers great expressivity: if a training set of human ratings data is available, the metrics may take full advantage of it and fit the ratings distribution tightly. Furthermore, learned metrics can be tuned to measure task-specific properties, such as fluency, faithfulness, grammatically, or style. On the other hand, hybrid metrics offer robustness. They may provide better results when there is little to no training data, and they do not rely on the assumption that training and test data are identically distributed.
And indeed, the iid assumption is particularly problematic in NLG evaluation because of domain drifts, that have been the main target of the metrics literature, but also because of quality drifts: NLG systems tend to get better over time, and therefore a model trained on ratings data from 2015 may fail to distinguish top performing systems in 2019, especially for newer research tasks. An ideal learned metric would be able to both take full advantage of available ratings data for training, and be robust to distribution drifts, i.e., it should be able to extrapolate.
Our insight is that it is possible to combine expressivity and robustness by pre-training a fully learned metric on large amounts of synthetic data, before fine-tuning it on human ratings. To this end, we introduce Bleurt, a text generation metric based on BERT BIBREF19. A key ingredient of Bleurt is a novel pre-training scheme, which uses random perturbations of Wikipedia sentences augmented with a diverse set of lexical and semantic-level supervision signals.
To demonstrate our approach, we train Bleurt for English and evaluate it under different generalization regimes. We first verify that it provides state-of-the-art results on all recent years of the WMT Metrics Shared task (2017 to 2019, to-English language pairs). We then stress-test its ability to cope with quality drifts with a synthetic benchmark based on WMT 2017. Finally, we show that it can easily adapt to a different domain with three tasks from a data-to-text dataset, WebNLG 2017 BIBREF20. Ablations show that our synthetic pretraining scheme increases performance in the iid setting, and is critical to ensure robustness when the training data is scarce, skewed, or out-of-domain.
<<</Introduction>>>
<<<Preliminaries>>>
Define $= (x_1,..,x_{r})$ to be the reference sentence of length $r$ where each $x_i$ is a token and let $\tilde{} = (\tilde{x}_1,..,\tilde{x}_{p})$ be a prediction sentence of length $p$. Let $\lbrace (_i, \tilde{}_i, y_i)\rbrace _{n=1}^{N}$ be a training dataset of size $N$ where $y_i \in [0, 1]$ is the human rating that indicates how good $\tilde{}_i$ is with respect to $_i$. Given the training data, our goal is to learn a function $: (, \tilde{}) \rightarrow y$ that predicts the human rating.
<<</Preliminaries>>>
<<<Fine-Tuning BERT for Quality Evaluation>>>
Given the small amounts of rating data available, it is natural to leverage unsupervised representations for this task. In our model, we use BERT (Bidirectional Encoder Representations from Transformers) BIBREF19, which is an unsupervised technique that learns contextualized representations of sequences of text. Given $$ and $\tilde{}$, BERT is a Transformer BIBREF21 that returns a sequence of contextualized vectors:
where $_{\mathrm {[CLS]}}$ is the representation for the special $\mathrm {[CLS]}$ token. As described by devlin2018bert, we add a linear layer on top of the $\mathrm {[CLS]}$ vector to predict the rating:
where $$ and $$ are the weight matrix and bias vector respectively. Both the above linear layer as well as the BERT parameters are trained (i.e. fine-tuned) on the supervised data which typically numbers in a few thousand examples. We use the regression loss $\ell _{\textrm {supervised}} = \frac{1}{N} \sum _{n=1}^{N} \Vert y_i - \hat{y} \Vert ^2 $.
Although this approach is quite straightforward, we will show in Section SECREF5 that it gives state-of-the-art results on WMT Metrics Shared Task 17-19, which makes it a high-performing evaluation metric. However, fine-tuning BERT requires a sizable amount of iid data, which is less than ideal for a metric that should generalize to a variety of tasks and model drift.
<<</Fine-Tuning BERT for Quality Evaluation>>>
<<<Pre-Training on Synthetic Data>>>
The key aspect of our approach is a pre-training technique that we use to “warm up” BERT before fine-tuning on rating data. We generate a large number of of synthetic reference-candidate pairs $(, \tilde{})$, and we train BERT on several lexical- and semantic-level supervision signals with a multitask loss. As our experiments will show, Bleurt generalizes much better after this phase, especially with incomplete training data.
Any pre-training approach requires a dataset and a set of pre-training tasks. Ideally, the setup should resemble the final NLG evaluation task, i.e., the sentence pairs should be distributed similarly and the pre-training signals should correlate with human ratings. Unfortunately, we cannot have access to the NLG models that we will evaluate in the future. Therefore, we optimized our scheme for generality, with three requirements. (1) The set of reference sentences should be large and diverse, so that Bleurt can cope with a wide range of NLG domains and tasks. (2) The sentence pairs should contain a wide variety of lexical, syntactic, and semantic dissimilarities. The aim here is to anticipate all variations that an NLG system may produce, e.g., phrase substitution, paraphrases, noise, or omissions. (3) The pre-training objectives should effectively capture those phenomena, so that Bleurt can learn to identify them. The following sections present our approach.
<<<Generating Sentence Pairs>>>
One way to expose Bleurt to a wide variety of sentence differences is to use existing sentence pairs datasets BIBREF22, BIBREF23, BIBREF24. These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions). We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(, \tilde{})$ by randomly perturbing 1.8 million segments $$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. We obtain about 6.5 million perturbations $\tilde{}$. Let us describe those techniques.
<<<Mask-filling with BERT:>>>
BERT's initial training task is to fill gaps (i.e., masked tokens) in tokenized sentences. We leverage this functionality by inserting masks at random positions in the Wikipedia sentences, and fill them with the language model. Thus, we introduce lexical alterations while maintaining the fluency of the sentence. We use two masking strategies—we either introduce the masks at random positions in the sentences, or we create contiguous sequences of masked tokens. More details are provided in the Appendix.
<<</Mask-filling with BERT:>>>
<<<Backtranslation:>>>
We generate paraphrases and perturbations with backtranslation, that is, round trips from English to another language and then back to English with a translation model BIBREF25, BIBREF26, BIBREF27. Our primary aim is to create variants of the reference sentence that preserves semantics. Additionally, we use the mispredictions of the backtranslation models as a source of realistic alterations.
<<</Backtranslation:>>>
<<<Dropping words:>>>
We found it useful in our experiments to randomly drop words from the synthetic examples above to create other examples. This method prepares Bleurt for “pathological” behaviors or NLG systems, e.g., void predictions, or sentence truncation.
<<</Dropping words:>>>
<<</Generating Sentence Pairs>>>
<<<Pre-Training Signals>>>
The next step is to augment each sentence pair $(, \tilde{})$ with a set of pre-training signals $\lbrace {\tau }_k\rbrace $, where ${\tau }_k$ is the target vector of pre-training task $k$. Good pre-training signals should capture a wide variety of lexical and semantic differences. They should also be cheap to obtain, so that the approach can scale to large amounts of synthetic data. The following section presents our 9 pre-training tasks, summarized in Table TABREF3. Additional implementation details are in the Appendix.
<<<Automatic Metrics:>>>
We create three signals ${\tau _{\text{BLEU}}}$, ${\tau _{\text{ROUGE}}}$, and ${\tau _{\text{BERTscore}}}$ with sentence BLEU BIBREF13, ROUGE BIBREF14, and BERTscore BIBREF28 respectively (we use precision, recall and F-score for the latter two).
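As an illustration, a sentence-level BLEU target could be computed as below, assuming NLTK; the ROUGE and BERTscore targets would be produced analogously with their respective packages. This is a sketch, not the authors' code.

# Illustrative sketch: sentence-level BLEU as one pre-training target.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_signal(reference: str, candidate: str) -> float:
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference.split()], candidate.split(),
                         smoothing_function=smooth)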
<<</Automatic Metrics:>>>
<<<Backtranslation Likelihood:>>>
The idea behind this signal is to leverage existing translation models to measure semantic equivalence. Given a pair $(z, \tilde{z})$, this training signal measures the probability that $\tilde{z}$ is a backtranslation of $z$, $P(\tilde{z} \mid z)$, normalized by the length of $\tilde{z}$. Let $P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}} \mid z)$ be a translation model that assigns probabilities to French sentences $z_{\texttt {fr}}$ conditioned on English sentences $z$ and let $P_{\texttt {fr}\rightarrow \texttt {en}}(z \mid z_{\texttt {fr}})$ be a translation model that assigns probabilities to English sentences given French sentences. If $|\tilde{z}|$ is the number of tokens in $\tilde{z}$, we define our score as ${\tau }_{\text{en-fr}, \tilde{z} \mid z} = \frac{\log P(\tilde{z} \mid z)}{|\tilde{z}|}$, with:

$P(\tilde{z} \mid z) = \sum _{z_{\texttt {fr}}} P_{\texttt {fr}\rightarrow \texttt {en}}(\tilde{z} \mid z_{\texttt {fr}})\, P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}} \mid z)$

Because computing the summation over all possible French sentences is intractable, we approximate the sum using $z_{\texttt {fr}}^\ast = \operatorname{arg\,max}_{z_{\texttt {fr}}} P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}} \mid z)$ and we assume that $P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}}^\ast \mid z) \approx 1$:

$P(\tilde{z} \mid z) \approx P_{\texttt {fr}\rightarrow \texttt {en}}(\tilde{z} \mid z_{\texttt {fr}}^\ast )$

We can trivially reverse the procedure to compute $P(z \mid \tilde{z})$, thus we create 4 pre-training signals ${\tau }_{\text{en-fr}, z \mid \tilde{z}}$, ${\tau }_{\text{en-fr}, \tilde{z} \mid z}$, ${\tau }_{\text{en-de}, z \mid \tilde{z}}$, ${\tau }_{\text{en-de}, \tilde{z} \mid z}$ with two pairs of languages ($\texttt {en}\leftrightarrow \texttt {de}$ and $\texttt {en}\leftrightarrow \texttt {fr}$) in both directions.
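A minimal sketch of the length-normalized backtranslation likelihood; the translation model interfaces are passed in as callables because they are hypothetical placeholders, not a real API named by the paper.

# Illustrative sketch of tau_{en-fr, z_tilde | z}. `en_to_fr_translate` stands in
# for beam-search decoding of an en->fr model (approximate argmax), and
# `fr_to_en_logprob` scores an English sentence given a French source.
def backtranslation_signal(z, z_tilde, en_to_fr_translate, fr_to_en_logprob):
    z_fr_star = en_to_fr_translate(z)                    # approximate argmax of P_en->fr(. | z)
    logprob = fr_to_en_logprob(z_tilde, src=z_fr_star)   # log P_fr->en(z_tilde | z_fr*)
    n_tokens = max(len(z_tilde.split()), 1)
    return logprob / n_tokens                            # length-normalized score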
<<</Backtranslation Likelihood:>>>
<<<Textual Entailment:>>>
The signal ${\tau }_\text{entail}$ expresses whether $z$ entails or contradicts $\tilde{z}$ using a classifier. We report the probability of three labels: Entail, Contradict, and Neutral, using BERT fine-tuned on an entailment dataset, MNLI BIBREF19, BIBREF23.
<<</Textual Entailment:>>>
<<<Backtranslation flag:>>>
The signal ${\tau }_\text{backtran\_flag}$ is a Boolean that indicates whether the perturbation was generated with backtranslation or with mask-filling.
<<</Backtranslation flag:>>>
<<</Pre-Training Signals>>>
<<<Modeling>>>
For each pre-training task, our model uses either a regression or a classification loss. We then aggregate the task-level losses with a weighted sum.
Let ${\tau }_k$ describe the target vector for each task, e.g., the probabilities for the classes Entail, Contradict, Neutral, or the precision, recall, and F-score for ROUGE. If ${\tau }_k$ is a regression task, then the loss used is the $\ell _2$ loss i.e. $\ell _k = \Vert {\tau }_k - \hat{{\tau }}_k \Vert _2^2 / |{\tau }_k|$ where $|{\tau }_k|$ is the dimension of ${\tau }_k$ and $\hat{{\tau }}_k$ is computed by using a task-specific linear layer on top of the $\textrm {[CLS]}$ embedding: $\hat{{\tau }}_k = W_{\tau _k} \tilde{v}_{\textrm {[CLS]}} + b_{\tau _k}$. If ${\tau }_k$ is a classification task, we use a separate linear layer to predict a logit for each class $c$: $\hat{{\tau }}_{kc} = W_{\tau _{kc}} \tilde{v}_{\textrm {[CLS]}} + b_{\tau _{kc}}$, and we use the multiclass cross-entropy loss. We define our aggregate pre-training loss function as follows: $\ell _{\text{pre-training}} = \frac{1}{M} \sum _{m=1}^{M} \sum _{k=1}^{K} \gamma _k\, \ell _k({\tau }_k^m, \hat{{\tau }}_k^m)$, where ${\tau }_k^m$ is the target vector for example $m$, $M$ is the number of synthetic examples, and $\gamma _k$ are hyperparameter weights obtained with grid search (more details in the Appendix).
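A minimal PyTorch sketch of this weighted multitask loss, assuming one linear head per task on top of the [CLS] vector; the task names, head dictionary, and weights are placeholders, not the paper's exact configuration.

# Illustrative sketch of the aggregate pre-training loss.
import torch
import torch.nn.functional as F

def pretraining_loss(cls_vec, heads, targets, task_types, gammas):
    # cls_vec: [batch, hidden]; heads: dict task -> torch.nn.Linear
    # targets[k]: regression targets [batch, dim_k] or class indices [batch]
    total = torch.zeros((), device=cls_vec.device)
    for k, head in heads.items():
        pred = head(cls_vec)                          # [batch, dim_k]
        if task_types[k] == "regression":
            loss_k = F.mse_loss(pred, targets[k])     # averaged l2 loss
        else:
            loss_k = F.cross_entropy(pred, targets[k])
        total = total + gammas[k] * loss_k
    return total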
<<</Modeling>>>
<<</Pre-Training on Synthetic Data>>>
<<<Experiments>>>
In this section, we report our experimental results for two tasks, translation and data-to-text. First, we benchmark Bleurt against existing text generation metrics on the last 3 years of the WMT Metrics Shared Task BIBREF29. We then evaluate its robustness to quality drifts with a series of synthetic datasets based on WMT17. We test Bleurt's ability to adapt to different tasks with the WebNLG 2017 Challenge Dataset BIBREF20. Finally, we measure the contribution of each pre-training task with ablation experiments.
<<<Our Models:>>>
Unless specified otherwise, all Bleurt models are trained in three steps: regular BERT pre-training BIBREF19, pre-training on synthetic data (as explained in Section SECREF4), and fine-tuning on task-specific ratings (translation and/or data-to-text). We experiment with two versions of Bleurt, BLEURT and BLEURTbase, respectively based on BERT-Large (24 layers, 1024 hidden units, 16 heads) and BERT-Base (12 layers, 768 hidden units, 12 heads) BIBREF19, both uncased. We use batch size 32, learning rate 1e-5, and 800,000 steps for pre-training and 40,000 steps for fine-tuning. We provide the full detail of our training setup in the Appendix.
<<</Our Models:>>>
<<<WMT Metrics Shared Task>>>
<<<Datasets and Metrics:>>>
We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which includes several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year. The test sets for years 2018 and 2019 are noisier, as reported by the organizers and shown by the overall lower correlations.
We evaluate the agreement between the automatic metrics and the human ratings. For each year, we report two metrics: Kendall's Tau $\tau $ (for consistency across experiments), and the official WMT metric for that year (for completeness). The official WMT metric is either Pearson's correlation or a robust variant of Kendall's Tau called DARR, described in the Appendix. All the numbers come from our own implementation of the benchmark. Our results are globally consistent with the official results but we report small differences in 2018 and 2019, marked in the tables.
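As a small illustration of the agreement computation (not the official benchmark code), Kendall's Tau between a metric's scores and human ratings can be obtained with SciPy:

# Illustrative sketch: Kendall's Tau agreement between metric scores and ratings.
from scipy.stats import kendalltau

def metric_agreement(metric_scores, human_ratings):
    tau, _p_value = kendalltau(metric_scores, human_ratings)
    return tau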
<<</Datasets and Metrics:>>>
<<<Models:>>>
We experiment with four versions of Bleurt: BLEURT, BLEURTbase, BLEURT -pre and BLEURTbase -pre. The first two models are based on BERT-large and BERT-base. In the latter two versions, we skip the pre-training phase and fine-tune directly on the WMT ratings. For each year of the WMT shared task, we use the test set from the previous years for training and validation. We describe our setup in further detail in the Appendix. We compare Bleurt to participant data from the shared task and automatic metrics that we ran ourselves. In the former case, we use the best-performing contestants for each year, that is, chrF++, BEER, Meteor++, RUSE, Yisi1, ESIM and Yisi1-SRL BIBREF30. All the contestants use the same WMT training data, in addition to existing sentence or token embeddings. In the latter case, we use Moses sentenceBLEU, BERTscore BIBREF28, and MoverScore BIBREF31. For BERTscore, we use BERT-large uncased for fairness, and roBERTa (the recommended version) for completeness BIBREF32. We run MoverScore on WMT 2017 using the scripts published by the authors.
<<</Models:>>>
<<<Results:>>>
Tables TABREF14, TABREF15, TABREF16 show the results. For years 2017 and 2018, a Bleurt-based metric dominates the benchmark for each language pair (Tables TABREF14 and TABREF15). BLEURT and BLEURTbase are also competitive for year 2019: they yield the best results for every language pair on Kendall's Tau, and they come first for 4 out of 7 pairs on DARR. As expected, BLEURT dominates BLEURTbase in the majority of cases. Pre-training consistently improves the results of BLEURT and BLEURTbase. We observe the largest effect on year 2017, where it adds up to 7.4 Kendall Tau points for BLEURTbase (zh-en). The effect is milder on years 2018 and 2019, up to 2.1 points (tr-en, 2018). We explain the difference by the fact that the training data used for 2017 is smaller than the datasets used for the following years, so pre-training is likelier to help. In general pre-training yields higher returns for BERT-base than for BERT-large—in fact, BLEURTbase with pre-training is often better than BLEURT without.
Takeaways: Pre-training delivers consistent improvements, especially for BERT-base. Bleurt yields state-of-the-art performance for all years of the WMT Metrics Shared task.
<<</Results:>>>
<<</WMT Metrics Shared Task>>>
<<<Robustness to Quality Drift>>>
We assess our claim that pre-training makes Bleurt robust to quality drifts, by constructing a series of tasks for which it is increasingly pressured to extrapolate. All the experiments that follow are based on the WMT Metrics Shared Task 2017, because the ratings for this edition are particularly reliable.
<<<Methodology:>>>
We create increasingly challenging datasets by sub-sampling the records from the WMT Metrics shared task, keeping low-rated translations for training and high-rated translations for test. The key parameter is the skew factor $\alpha $, that measures how much the training data is left-skewed and the test data is right-skewed. Figure FIGREF24 demonstrates the ratings distribution that we used in our experiments. The training data shrinks as $\alpha $ increases: in the most extreme case ($\alpha =3.0$), we use only 11.9% of the original 5,344 training records. We give the full detail of our sampling methodology in the Appendix.
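The exact skewed sampling scheme is given in the paper's Appendix; the sketch below only illustrates the train/test asymmetry it induces, with threshold values left as assumptions.

# Illustrative sketch of a quality-drift split: low-rated translations go to
# training, high-rated ones to test.
def quality_drift_split(records, low_threshold, high_threshold):
    # records: list of (reference, candidate, rating) triples
    train = [r for r in records if r[2] <= low_threshold]
    test = [r for r in records if r[2] >= high_threshold]
    return train, test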
We use BLEURT with and without pre-training and we compare to Moses sentBLEU and BERTscore. We use BERT-large uncased for both BLEURT and BERTscore.
<<</Methodology:>>>
<<<Takeaways:>>>
Pre-training makes BLEURT significantly more robust to quality drifts.
<<</Takeaways:>>>
<<</Robustness to Quality Drift>>>
<<<WebNLG Experiments>>>
In this section, we evaluate Bleurt's performance on three tasks from a data-to-text dataset, the WebNLG Challenge 2017 BIBREF33. The aim is to assess Bleurt's capacity to adapt to new tasks with limited training data.
<<<Dataset and Evaluation Tasks:>>>
The WebNLG challenge benchmarks systems that produce natural language descriptions of entities (e.g., buildings, cities, artists) from sets of 1 to 5 RDF triples. The organizers released the human assessments for 9 systems over 223 inputs, that is, 4,677 sentence pairs in total (we removed null values). Each input comes with 1 to 3 reference descriptions. The submissions are evaluated on 3 aspects: semantics, grammar, and fluency. We treat each type of rating as a separate modeling task. The data has no natural split between train and test, therefore we experiment with several schemes. We allocate 0% to about 50% of the data to training, and we split either on the evaluated systems or on the RDF inputs in order to test different generalization regimes.
<<</Dataset and Evaluation Tasks:>>>
<<<Systems and Baselines:>>>
BLEURT -pre -wmt, is a public BERT-large uncased checkpoint directly trained on the WebNLG ratings. BLEURT -wmt was first pre-trained on synthetic data, then fine-tuned on WebNLG data. BLEURT was trained in three steps: first on synthetic data, then on WMT data (16-18), and finally on WebNLG data. When a record comes with several references, we run BLEURT on each reference and report the highest value BIBREF28.
We report four baselines: BLEU, TER, Meteor, and BERTscore. The first three were computed by the WebNLG competition organizers. We ran the latter one ourselves, using BERT-large uncased for a fair comparison.
<<</Systems and Baselines:>>>
<<</WebNLG Experiments>>>
<<<Ablation Experiments>>>
Figure FIGREF36 presents our ablation experiments on WMT 2017, which highlight the relative importance of each pre-training task. On the left side, we compare Bleurt pre-trained on a single task to Bleurt without pre-training. On the right side, we compare full Bleurt to Bleurt pre-trained on all tasks except one. Pre-training on BERTscore, entailment, and the backtranslation scores yield improvements (symmetrically, ablating them degrades Bleurt). Oppositely, BLEU and ROUGE have a negative impact. We conclude that pre-training on high quality signals helps BLEURT, but that metrics that correlate less well with human judgment may in fact harm the model.
<<</Ablation Experiments>>>
<<</Experiments>>>
<<<Related Work>>>
The WMT shared metrics competition BIBREF34, BIBREF18, BIBREF11 has inspired the creation of many learned metrics, some of which use regression or deep learning BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF30. Other metrics have been introduced, such as the recent MoverScore BIBREF31 which combines contextual embeddings and Earth Mover's Distance. We provide a head-to-head comparison with the best performing of those in our experiments. Other approaches do not attempt to estimate quality directly, but use information extraction or question answering as a proxy BIBREF7, BIBREF39, BIBREF40. Those are complementary to our work.
There has been recent work that uses BERT for evaluation. BERTScore BIBREF28 proposes replacing the hard n-gram overlap of BLEU with a soft-overlap using BERT embeddings. We use it in all our experiments. Bertr BIBREF30 and YiSi BIBREF30 also make use of BERT embeddings to compute a similarity score. Sum-QE BIBREF41 fine-tunes BERT for quality estimation as we describe in Section SECREF3. Our focus is different—we train metrics that are not only state-of-the-art in conventional iid experimental setups, but also robust in the presence of scarce and out-of-distribution training data. To our knowledge no existing work has explored pre-training and extrapolation in the context of NLG.
Noisy pre-training has been proposed before for other tasks such as paraphrasing BIBREF42, BIBREF43 but generally not with synthetic data. Generating synthetic data via paraphrases and perturbations has been commonly used for generating adversarial examples BIBREF44, BIBREF45, BIBREF46, BIBREF47, an orthogonal line of research.
<<</Related Work>>>
<<<Conclusion>>>
We presented Bleurt, a reference-based text generation metric for English. Because the metric is trained end-to-end, Bleurt can model human assessment with superior accuracy. Furthermore, pre-training makes the metric particularly robust to both domain and quality drifts. Future research directions include multilingual NLG evaluation, and hybrid methods involving both humans and classifiers.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Related Work, Conclusion"
],
"type": "disordered_section"
}
|
2004.04696
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
BLEURT: Learning Robust Metrics for Text Generation
<<<Abstract>>>
Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.
<<</Abstract>>>
<<<Introduction>>>
In the last few years, research in natural text generation (NLG) has made significant progress, driven largely by the neural encoder-decoder paradigm BIBREF0, BIBREF1 which can tackle a wide array of tasks including translation BIBREF2, summarization BIBREF3, BIBREF4, structured-data-to-text generation BIBREF5, BIBREF6, BIBREF7 dialog BIBREF8, BIBREF9 and image captioning BIBREF10. However, progress is increasingly impeded by the shortcomings of existing metrics BIBREF7, BIBREF11, BIBREF12.
Human evaluation is often the best indicator of the quality of a system. However, designing crowd sourcing experiments is an expensive and high-latency process, which does not easily fit in a daily model development pipeline. Therefore, NLG researchers commonly use automatic evaluation metrics, which provide an acceptable proxy for quality and are very cheap to compute. This paper investigates sentence-level, reference-based metrics, which describe the extent to which a candidate sentence is similar to a reference one. The exact definition of similarity may range from string overlap to logical entailment.
The first generation of metrics relied on handcrafted rules that measure the surface similarity between the sentences. To illustrate, BLEU BIBREF13 and ROUGE BIBREF14, two popular metrics, rely on N-gram overlap. Because those metrics are only sensitive to lexical variation, they cannot appropriately reward semantic or syntactic variations of a given reference. Thus, they have been repeatedly shown to correlate poorly with human judgment, in particular when all the systems to compare have a similar level of accuracy BIBREF15, BIBREF16, BIBREF17.
Increasingly, NLG researchers have addressed those problems by injecting learned components in their metrics. To illustrate, consider the WMT Metrics Shared Task, an annual benchmark in which translation metrics are compared on their ability to imitate human assessments. The last two years of the competition were largely dominated by neural net-based approaches, RUSE, YiSi and ESIM BIBREF18, BIBREF11. Current approaches largely fall into two categories. Fully learned metrics, such as BEER, RUSE, and ESIM are trained end-to-end, and they typically rely on handcrafted features and/or learned embeddings. Conversely, hybrid metrics, such as YiSi and BERTscore combine trained elements, e.g., contextual embeddings, with handwritten logic, e.g., token alignment rules. The first category typically offers great expressivity: if a training set of human ratings data is available, the metrics may take full advantage of it and fit the ratings distribution tightly. Furthermore, learned metrics can be tuned to measure task-specific properties, such as fluency, faithfulness, grammaticality, or style. On the other hand, hybrid metrics offer robustness. They may provide better results when there is little to no training data, and they do not rely on the assumption that training and test data are identically distributed.
And indeed, the iid assumption is particularly problematic in NLG evaluation because of domain drifts, that have been the main target of the metrics literature, but also because of quality drifts: NLG systems tend to get better over time, and therefore a model trained on ratings data from 2015 may fail to distinguish top performing systems in 2019, especially for newer research tasks. An ideal learned metric would be able to both take full advantage of available ratings data for training, and be robust to distribution drifts, i.e., it should be able to extrapolate.
Our insight is that it is possible to combine expressivity and robustness by pre-training a fully learned metric on large amounts of synthetic data, before fine-tuning it on human ratings. To this end, we introduce Bleurt, a text generation metric based on BERT BIBREF19. A key ingredient of Bleurt is a novel pre-training scheme, which uses random perturbations of Wikipedia sentences augmented with a diverse set of lexical and semantic-level supervision signals.
To demonstrate our approach, we train Bleurt for English and evaluate it under different generalization regimes. We first verify that it provides state-of-the-art results on all recent years of the WMT Metrics Shared task (2017 to 2019, to-English language pairs). We then stress-test its ability to cope with quality drifts with a synthetic benchmark based on WMT 2017. Finally, we show that it can easily adapt to a different domain with three tasks from a data-to-text dataset, WebNLG 2017 BIBREF20. Ablations show that our synthetic pretraining scheme increases performance in the iid setting, and is critical to ensure robustness when the training data is scarce, skewed, or out-of-domain.
<<</Introduction>>>
<<<Preliminaries>>>
Define $x = (x_1,..,x_{r})$ to be the reference sentence of length $r$ where each $x_i$ is a token and let $\tilde{x} = (\tilde{x}_1,..,\tilde{x}_{p})$ be a prediction sentence of length $p$. Let $\lbrace (x_i, \tilde{x}_i, y_i)\rbrace _{i=1}^{N}$ be a training dataset of size $N$ where $y_i \in [0, 1]$ is the human rating that indicates how good $\tilde{x}_i$ is with respect to $x_i$. Given the training data, our goal is to learn a function $f: (x, \tilde{x}) \rightarrow y$ that predicts the human rating.
<<</Preliminaries>>>
<<<Fine-Tuning BERT for Quality Evaluation>>>
Given the small amounts of rating data available, it is natural to leverage unsupervised representations for this task. In our model, we use BERT (Bidirectional Encoder Representations from Transformers) BIBREF19, which is an unsupervised technique that learns contextualized representations of sequences of text. Given $x$ and $\tilde{x}$, BERT is a Transformer BIBREF21 that returns a sequence of contextualized vectors:

$v_{\mathrm {[CLS]}}, v_{x_1}, ..., v_{\tilde{x}_{p}} = \mathrm {BERT}(x, \tilde{x})$

where $v_{\mathrm {[CLS]}}$ is the representation for the special $\mathrm {[CLS]}$ token. As described by devlin2018bert, we add a linear layer on top of the $\mathrm {[CLS]}$ vector to predict the rating:

$\hat{y} = f(x, \tilde{x}) = W v_{\mathrm {[CLS]}} + b$

where $W$ and $b$ are the weight matrix and bias vector respectively. Both the above linear layer as well as the BERT parameters are trained (i.e. fine-tuned) on the supervised data, which typically amounts to a few thousand examples. We use the regression loss $\ell _{\textrm {supervised}} = \frac{1}{N} \sum _{i=1}^{N} \Vert y_i - \hat{y}_i \Vert ^2 $.
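A minimal PyTorch sketch of this fine-tuning setup, assuming the HuggingFace transformers package; the model name and class are assumptions for exposition, not the released BLEURT code.

# Illustrative sketch: BERT encodes the (reference, candidate) pair and a linear
# layer on the [CLS] vector predicts the rating; training minimizes an MSE loss.
import torch.nn as nn
from transformers import AutoModel

class RatingPredictor(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        cls_vec = out.last_hidden_state[:, 0]    # [CLS] representation
        return self.head(cls_vec).squeeze(-1)    # predicted rating y_hat

# Training would minimize nn.MSELoss() between the predictions and the ratings y_i.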
Although this approach is quite straightforward, we will show in Section SECREF5 that it gives state-of-the-art results on WMT Metrics Shared Task 17-19, which makes it a high-performing evaluation metric. However, fine-tuning BERT requires a sizable amount of iid data, which is less than ideal for a metric that should generalize to a variety of tasks and model drift.
<<</Fine-Tuning BERT for Quality Evaluation>>>
<<<Pre-Training on Synthetic Data>>>
The key aspect of our approach is a pre-training technique that we use to “warm up” BERT before fine-tuning on rating data. We generate a large number of synthetic reference-candidate pairs $(z, \tilde{z})$, and we train BERT on several lexical- and semantic-level supervision signals with a multitask loss. As our experiments will show, Bleurt generalizes much better after this phase, especially with incomplete training data.
Any pre-training approach requires a dataset and a set of pre-training tasks. Ideally, the setup should resemble the final NLG evaluation task, i.e., the sentence pairs should be distributed similarly and the pre-training signals should correlate with human ratings. Unfortunately, we cannot have access to the NLG models that we will evaluate in the future. Therefore, we optimized our scheme for generality, with three requirements. (1) The set of reference sentences should be large and diverse, so that Bleurt can cope with a wide range of NLG domains and tasks. (2) The sentence pairs should contain a wide variety of lexical, syntactic, and semantic dissimilarities. The aim here is to anticipate all variations that an NLG system may produce, e.g., phrase substitution, paraphrases, noise, or omissions. (3) The pre-training objectives should effectively capture those phenomena, so that Bleurt can learn to identify them. The following sections present our approach.
<<<Generating Sentence Pairs>>>
One way to expose Bleurt to a wide variety of sentence differences is to use existing sentence pairs datasets BIBREF22, BIBREF23, BIBREF24. These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions). We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(z, \tilde{z})$ by randomly perturbing 1.8 million segments $z$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. We obtain about 6.5 million perturbations $\tilde{z}$. Let us describe those techniques.
<<<Mask-filling with BERT:>>>
BERT's initial training task is to fill gaps (i.e., masked tokens) in tokenized sentences. We leverage this functionality by inserting masks at random positions in the Wikipedia sentences, and fill them with the language model. Thus, we introduce lexical alterations while maintaining the fluency of the sentence. We use two masking strategies—we either introduce the masks at random positions in the sentences, or we create contiguous sequences of masked tokens. More details are provided in the Appendix.
<<</Mask-filling with BERT:>>>
<<<Backtranslation:>>>
We generate paraphrases and perturbations with backtranslation, that is, round trips from English to another language and then back to English with a translation model BIBREF25, BIBREF26, BIBREF27. Our primary aim is to create variants of the reference sentence that preserve semantics. Additionally, we use the mispredictions of the backtranslation models as a source of realistic alterations.
<<</Backtranslation:>>>
<<<Dropping words:>>>
We found it useful in our experiments to randomly drop words from the synthetic examples above to create other examples. This method prepares Bleurt for “pathological” behaviors of NLG systems, e.g., void predictions, or sentence truncation.
<<</Dropping words:>>>
<<</Generating Sentence Pairs>>>
<<<Pre-Training Signals>>>
The next step is to augment each sentence pair $(z, \tilde{z})$ with a set of pre-training signals $\lbrace {\tau }_k\rbrace $, where ${\tau }_k$ is the target vector of pre-training task $k$. Good pre-training signals should capture a wide variety of lexical and semantic differences. They should also be cheap to obtain, so that the approach can scale to large amounts of synthetic data. The following section presents our 9 pre-training tasks, summarized in Table TABREF3. Additional implementation details are in the Appendix.
<<<Automatic Metrics:>>>
We create three signals ${\tau _{\text{BLEU}}}$, ${\tau _{\text{ROUGE}}}$, and ${\tau _{\text{BERTscore}}}$ with sentence BLEU BIBREF13, ROUGE BIBREF14, and BERTscore BIBREF28 respectively (we use precision, recall and F-score for the latter two).
<<</Automatic Metrics:>>>
<<<Backtranslation Likelihood:>>>
The idea behind this signal is to leverage existing translation models to measure semantic equivalence. Given a pair $(z, \tilde{z})$, this training signal measures the probability that $\tilde{z}$ is a backtranslation of $z$, $P(\tilde{z} \mid z)$, normalized by the length of $\tilde{z}$. Let $P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}} \mid z)$ be a translation model that assigns probabilities to French sentences $z_{\texttt {fr}}$ conditioned on English sentences $z$ and let $P_{\texttt {fr}\rightarrow \texttt {en}}(z \mid z_{\texttt {fr}})$ be a translation model that assigns probabilities to English sentences given French sentences. If $|\tilde{z}|$ is the number of tokens in $\tilde{z}$, we define our score as ${\tau }_{\text{en-fr}, \tilde{z} \mid z} = \frac{\log P(\tilde{z} \mid z)}{|\tilde{z}|}$, with:

$P(\tilde{z} \mid z) = \sum _{z_{\texttt {fr}}} P_{\texttt {fr}\rightarrow \texttt {en}}(\tilde{z} \mid z_{\texttt {fr}})\, P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}} \mid z)$

Because computing the summation over all possible French sentences is intractable, we approximate the sum using $z_{\texttt {fr}}^\ast = \operatorname{arg\,max}_{z_{\texttt {fr}}} P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}} \mid z)$ and we assume that $P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}}^\ast \mid z) \approx 1$:

$P(\tilde{z} \mid z) \approx P_{\texttt {fr}\rightarrow \texttt {en}}(\tilde{z} \mid z_{\texttt {fr}}^\ast )$

We can trivially reverse the procedure to compute $P(z \mid \tilde{z})$, thus we create 4 pre-training signals ${\tau }_{\text{en-fr}, z \mid \tilde{z}}$, ${\tau }_{\text{en-fr}, \tilde{z} \mid z}$, ${\tau }_{\text{en-de}, z \mid \tilde{z}}$, ${\tau }_{\text{en-de}, \tilde{z} \mid z}$ with two pairs of languages ($\texttt {en}\leftrightarrow \texttt {de}$ and $\texttt {en}\leftrightarrow \texttt {fr}$) in both directions.
<<</Backtranslation Likelihood:>>>
<<<Textual Entailment:>>>
The signal ${\tau }_\text{entail}$ expresses whether $z$ entails or contradicts $\tilde{z}$ using a classifier. We report the probability of three labels: Entail, Contradict, and Neutral, using BERT fine-tuned on an entailment dataset, MNLI BIBREF19, BIBREF23.
<<</Textual Entailment:>>>
<<<Backtranslation flag:>>>
The signal ${\tau }_\text{backtran\_flag}$ is a Boolean that indicates whether the perturbation was generated with backtranslation or with mask-filling.
<<</Backtranslation flag:>>>
<<</Pre-Training Signals>>>
<<<Modeling>>>
For each pre-training task, our model uses either a regression or a classification loss. We then aggregate the task-level losses with a weighted sum.
Let ${\tau }_k$ describe the target vector for each task, e.g., the probabilities for the classes Entail, Contradict, Neutral, or the precision, recall, and F-score for ROUGE. If ${\tau }_k$ is a regression task, then the loss used is the $\ell _2$ loss i.e. $\ell _k = \Vert {\tau }_k - \hat{{\tau }}_k \Vert _2^2 / |{\tau }_k|$ where $|{\tau }_k|$ is the dimension of ${\tau }_k$ and $\hat{{\tau }}_k$ is computed by using a task-specific linear layer on top of the $\textrm {[CLS]}$ embedding: $\hat{{\tau }}_k = W_{\tau _k} \tilde{v}_{\textrm {[CLS]}} + b_{\tau _k}$. If ${\tau }_k$ is a classification task, we use a separate linear layer to predict a logit for each class $c$: $\hat{{\tau }}_{kc} = W_{\tau _{kc}} \tilde{v}_{\textrm {[CLS]}} + b_{\tau _{kc}}$, and we use the multiclass cross-entropy loss. We define our aggregate pre-training loss function as follows: $\ell _{\text{pre-training}} = \frac{1}{M} \sum _{m=1}^{M} \sum _{k=1}^{K} \gamma _k\, \ell _k({\tau }_k^m, \hat{{\tau }}_k^m)$, where ${\tau }_k^m$ is the target vector for example $m$, $M$ is the number of synthetic examples, and $\gamma _k$ are hyperparameter weights obtained with grid search (more details in the Appendix).
<<</Modeling>>>
<<</Pre-Training on Synthetic Data>>>
<<<Experiments>>>
In this section, we report our experimental results for two tasks, translation and data-to-text. First, we benchmark Bleurt against existing text generation metrics on the last 3 years of the WMT Metrics Shared Task BIBREF29. We then evaluate its robustness to quality drifts with a series of synthetic datasets based on WMT17. We test Bleurt's ability to adapt to different tasks with the WebNLG 2017 Challenge Dataset BIBREF20. Finally, we measure the contribution of each pre-training task with ablation experiments.
<<<Our Models:>>>
Unless specified otherwise, all Bleurt models are trained in three steps: regular BERT pre-training BIBREF19, pre-training on synthetic data (as explained in Section SECREF4), and fine-tuning on task-specific ratings (translation and/or data-to-text). We experiment with two versions of Bleurt, BLEURT and BLEURTbase, respectively based on BERT-Large (24 layers, 1024 hidden units, 16 heads) and BERT-Base (12 layers, 768 hidden units, 12 heads) BIBREF19, both uncased. We use batch size 32, learning rate 1e-5, and 800,000 steps for pre-training and 40,000 steps for fine-tuning. We provide the full detail of our training setup in the Appendix.
<<</Our Models:>>>
<<<WMT Metrics Shared Task>>>
<<<Datasets and Metrics:>>>
We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which includes several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year. The test sets for years 2018 and 2019 are noisier, as reported by the organizers and shown by the overall lower correlations.
We evaluate the agreement between the automatic metrics and the human ratings. For each year, we report two metrics: Kendall's Tau $\tau $ (for consistency across experiments), and the official WMT metric for that year (for completeness). The official WMT metric is either Pearson's correlation or a robust variant of Kendall's Tau called DARR, described in the Appendix. All the numbers come from our own implementation of the benchmark. Our results are globally consistent with the official results but we report small differences in 2018 and 2019, marked in the tables.
<<</Datasets and Metrics:>>>
<<<Models:>>>
We experiment with four versions of Bleurt: BLEURT, BLEURTbase, BLEURT -pre and BLEURTbase -pre. The first two models are based on BERT-large and BERT-base. In the latter two versions, we skip the pre-training phase and fine-tune directly on the WMT ratings. For each year of the WMT shared task, we use the test set from the previous years for training and validation. We describe our setup in further detail in the Appendix. We compare Bleurt to participant data from the shared task and automatic metrics that we ran ourselves. In the former case, we use the best-performing contestants for each year, that is, chrF++, BEER, Meteor++, RUSE, Yisi1, ESIM and Yisi1-SRL BIBREF30. All the contestants use the same WMT training data, in addition to existing sentence or token embeddings. In the latter case, we use Moses sentenceBLEU, BERTscore BIBREF28, and MoverScore BIBREF31. For BERTscore, we use BERT-large uncased for fairness, and roBERTa (the recommended version) for completeness BIBREF32. We run MoverScore on WMT 2017 using the scripts published by the authors.
<<</Models:>>>
<<<Results:>>>
Tables TABREF14, TABREF15, TABREF16 show the results. For years 2017 and 2018, a Bleurt-based metric dominates the benchmark for each language pair (Tables TABREF14 and TABREF15). BLEURT and BLEURTbase are also competitive for year 2019: they yield the best results for every language pair on Kendall's Tau, and they come first for 4 out of 7 pairs on DARR. As expected, BLEURT dominates BLEURTbase in the majority of cases. Pre-training consistently improves the results of BLEURT and BLEURTbase. We observe the largest effect on year 2017, where it adds up to 7.4 Kendall Tau points for BLEURTbase (zh-en). The effect is milder on years 2018 and 2019, up to 2.1 points (tr-en, 2018). We explain the difference by the fact that the training data used for 2017 is smaller than the datasets used for the following years, so pre-training is likelier to help. In general pre-training yields higher returns for BERT-base than for BERT-large—in fact, BLEURTbase with pre-training is often better than BLEURT without.
Takeaways: Pre-training delivers consistent improvements, especially for BERT-base. Bleurt yields state-of-the-art performance for all years of the WMT Metrics Shared task.
<<</Results:>>>
<<</WMT Metrics Shared Task>>>
<<<Robustness to Quality Drift>>>
We assess our claim that pre-training makes Bleurt robust to quality drifts, by constructing a series of tasks for which it is increasingly pressured to extrapolate. All the experiments that follow are based on the WMT Metrics Shared Task 2017, because the ratings for this edition are particularly reliable.
<<<Methodology:>>>
We create increasingly challenging datasets by sub-sampling the records from the WMT Metrics shared task, keeping low-rated translations for training and high-rated translations for test. The key parameter is the skew factor $\alpha $, that measures how much the training data is left-skewed and the test data is right-skewed. Figure FIGREF24 demonstrates the ratings distribution that we used in our experiments. The training data shrinks as $\alpha $ increases: in the most extreme case ($\alpha =3.0$), we use only 11.9% of the original 5,344 training records. We give the full detail of our sampling methodology in the Appendix.
We use BLEURT with and without pre-training and we compare to Moses sentBLEU and BERTscore. We use BERT-large uncased for both BLEURT and BERTscore.
<<</Methodology:>>>
<<<Takeaways:>>>
Pre-training makes BLEURT significantly more robust to quality drifts.
<<</Takeaways:>>>
<<</Robustness to Quality Drift>>>
<<<WebNLG Experiments>>>
In this section, we evaluate Bleurt's performance on three tasks from a data-to-text dataset, the WebNLG Challenge 2017 BIBREF33. The aim is to assess Bleurt's capacity to adapt to new tasks with limited training data.
<<<Dataset and Evaluation Tasks:>>>
The WebNLG challenge benchmarks systems that produce natural language descriptions of entities (e.g., buildings, cities, artists) from sets of 1 to 5 RDF triples. The organizers released the human assessments for 9 systems over 223 inputs, that is, 4,677 sentence pairs in total (we removed null values). Each input comes with 1 to 3 reference descriptions. The submissions are evaluated on 3 aspects: semantics, grammar, and fluency. We treat each type of rating as a separate modeling task. The data has no natural split between train and test, therefore we experiment with several schemes. We allocate 0% to about 50% of the data to training, and we split either on the evaluated systems or on the RDF inputs in order to test different generalization regimes.
<<</Dataset and Evaluation Tasks:>>>
<<<Systems and Baselines:>>>
BLEURT -pre -wmt, is a public BERT-large uncased checkpoint directly trained on the WebNLG ratings. BLEURT -wmt was first pre-trained on synthetic data, then fine-tuned on WebNLG data. BLEURT was trained in three steps: first on synthetic data, then on WMT data (16-18), and finally on WebNLG data. When a record comes with several references, we run BLEURT on each reference and report the highest value BIBREF28.
We report four baselines: BLEU, TER, Meteor, and BERTscore. The first three were computed by the WebNLG competition organizers. We ran the latter one ourselves, using BERT-large uncased for a fair comparison.
<<</Systems and Baselines:>>>
<<</WebNLG Experiments>>>
<<<Ablation Experiments>>>
Figure FIGREF36 presents our ablation experiments on WMT 2017, which highlight the relative importance of each pre-training task. On the left side, we compare Bleurt pre-trained on a single task to Bleurt without pre-training. On the right side, we compare full Bleurt to Bleurt pre-trained on all tasks except one. Pre-training on BERTscore, entailment, and the backtranslation scores yield improvements (symmetrically, ablating them degrades Bleurt). Oppositely, BLEU and ROUGE have a negative impact. We conclude that pre-training on high quality signals helps BLEURT, but that metrics that correlate less well with human judgment may in fact harm the model.
<<</Ablation Experiments>>>
<<</Experiments>>>
<<<Related Work>>>
The WMT shared metrics competition BIBREF34, BIBREF18, BIBREF11 has inspired the creation of many learned metrics, some of which use regression or deep learning BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF30. Other metrics have been introduced, such as the recent MoverScore BIBREF31 which combines contextual embeddings and Earth Mover's Distance. We provide a head-to-head comparison with the best performing of those in our experiments. Other approaches do not attempt to estimate quality directly, but use information extraction or question answering as a proxy BIBREF7, BIBREF39, BIBREF40. Those are complementary to our work.
There has been recent work that uses BERT for evaluation. BERTScore BIBREF28 proposes replacing the hard n-gram overlap of BLEU with a soft-overlap using BERT embeddings. We use it in all our experiments. Bertr BIBREF30 and YiSi BIBREF30 also make use of BERT embeddings to compute a similarity score. Sum-QE BIBREF41 fine-tunes BERT for quality estimation as we describe in Section SECREF3. Our focus is different—we train metrics that are not only state-of-the-art in conventional iid experimental setups, but also robust in the presence of scarce and out-of-distribution training data. To our knowledge no existing work has explored pre-training and extrapolation in the context of NLG.
Noisy pre-training has been proposed before for other tasks such as paraphrasing BIBREF42, BIBREF43 but generally not with synthetic data. Generating synthetic data via paraphrases and perturbations has been commonly used for generating adversarial examples BIBREF44, BIBREF45, BIBREF46, BIBREF47, an orthogonal line of research.
<<</Related Work>>>
<<<Conclusion>>>
We presented Bleurt, a reference-based text generation metric for English. Because the metric is trained end-to-end, Bleurt can model human assessment with superior accuracy. Furthermore, pre-training makes the metric particularly robust to both domain and quality drifts. Future research directions include multilingual NLG evaluation, and hybrid methods involving both humans and classifiers.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Preliminaries"
],
"type": "disordered_section"
}
|
1911.05960
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Contextual Recurrent Units for Cloze-style Reading Comprehension
<<<Abstract>>>
Recurrent Neural Networks (RNN) are known as powerful models for handling sequential data, and are especially widely utilized in various natural language processing tasks. In this paper, we propose Contextual Recurrent Units (CRU) for enhancing local contextual representations in neural networks. The proposed CRU injects convolutional neural networks (CNN) into the recurrent units to enhance the ability to model the local context and reduce word ambiguities, even in bi-directional RNNs. We tested our CRU model on sentence-level and document-level modeling NLP tasks: sentiment classification and reading comprehension. Experimental results show that the proposed CRU model could give significant improvements over traditional CNN or RNN models, including bidirectional conditions, as well as various state-of-the-art systems on both tasks, showing its promise for extension to other NLP tasks as well.
<<</Abstract>>>
<<<Introduction>>>
Neural network based approaches have become popular frameworks in many machine learning research fields, showing its advantages over traditional methods. In NLP tasks, two types of neural networks are widely used: Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN).
RNNs are powerful models in various NLP tasks, such as machine translation BIBREF0, sentiment classification BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, reading comprehension BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, etc. The recurrent neural networks can flexibly model different lengths of sequences into a fixed representation. There are two main implementations of RNN: Long Short-Term Memory (LSTM) BIBREF12 and Gated Recurrent Unit (GRU) BIBREF0, which solve the gradient vanishing problems in vanilla RNNs.
Compared to RNN, the CNN model also shows competitive performances in some tasks, such as text classification BIBREF13, etc. However, different from RNN, CNN sets a pre-defined convolutional kernel to “summarize” a fixed window of adjacent elements into blended representations, showing its ability of modeling local context.
As both global and local information is important in most of NLP tasks BIBREF14, in this paper, we propose a novel recurrent unit, called Contextual Recurrent Unit (CRU). The proposed CRU model adopts advantages of RNN and CNN, where CNN is good at modeling local context, and RNN is superior in capturing long-term dependencies. We propose three variants of our CRU model: shallow fusion, deep fusion and deep-enhanced fusion.
To verify the effectiveness of our CRU model, we apply it to two different NLP tasks: sentiment classification and reading comprehension, where the former is sentence-level modeling, and the latter is document-level modeling. In the sentiment classification task, we build a standard neural network and replace the recurrent unit with our CRU model. To further demonstrate the effectiveness of our model, we also tested our CRU in reading comprehension tasks with a strengthened baseline system originating from the Attention-over-Attention Reader (AoA Reader) BIBREF10. Experimental results on public datasets show that our CRU model could substantially outperform various systems by a large margin, and set up new state-of-the-art performances on related datasets. The main contributions of our work are listed as follows.
[leftmargin=*]
We propose a novel neural recurrent unit called Contextual Recurrent Unit (CRU), which effectively incorporate the advantage of CNN and RNN. Different from previous works, our CRU model shows its excellent flexibility as GRU and provides better performance.
The CRU model is applied to both sentence-level and document-level modeling tasks and gives state-of-the-art performances.
The CRU could also give substantial improvements in the cloze-style reading comprehension task when the baseline system is strengthened by incorporating additional features that enrich the representations of unknown words and make the texts more readable to the machine.
<<</Introduction>>>
<<<Related Works>>>
Gated recurrent unit (GRU) was proposed in the context of neural machine translation BIBREF0. It has been shown that the GRU has comparable performance to the LSTM in some tasks. Another advantage of GRU is that it has a simpler neural architecture than LSTM, allowing much more efficient computation.
However, convolutional neural network (CNN) is not as popular as RNNs in NLP tasks, as the texts are formed temporally. But in some studies, CNN shows competitive performance to the RNN models, such as text classification BIBREF13.
Various efforts have been made on combining CNN and RNN. BIBREF3 proposed an architecture that combines CNN and GRU model with pre-trained word embeddings by word2vec. BIBREF5 proposed to combine asymmetric convolution neural network with the bidirectional LSTM network. BIBREF4 presented Dependency Sensitive CNN, which hierarchically construct text by using LSTMs and extracting features with convolution operations subsequently. BIBREF15 propose to make use of dependency relations information in the shortest dependency path (SDP) by combining CNN and two-channel LSTM units. BIBREF16 build a neural network for dialogue topic tracking where the CNN used to account for semantics at individual utterance and RNN for modeling conversational contexts along multiple turns in history.
The difference between our CRU model and previous works can be concluded as follows.
[leftmargin=*]
Our CRU model could adaptively control the amount of information that flows into different gates, which was not studied in previous works.
Also, the CRU does not introduce a pooling operation, as opposed to other works, such as CNN-GRU BIBREF3. Our motivation is to provide the same flexibility as the original GRU, while the pooling operation breaks this property (the output length is changed) and makes it impossible to do exact word-level attention over the output. However, in our CRU model, the output length is the same as the input's, so the CRU can be easily applied to various tasks where the GRU is used.
We also observed that by only using CNN to conclude contextual information is not strong enough. So we incorporate the original word embeddings to form a "word + context" representation for enhancement.
<<</Related Works>>>
<<<Our approach>>>
In this section, we will give a detailed introduction to our CRU model. Firstly, we will give a brief introduction to GRU BIBREF0 as preliminaries, and then three variants of our CRU model will be illustrated.
<<<Gated Recurrent Unit>>>
Gated Recurrent Unit (GRU) is a type of recurrent unit that models sequential data BIBREF0, which is similar to LSTM but is much simpler and more computationally efficient than the latter one. We will briefly introduce the formulation of GRU. Given a sequence $x = \lbrace x_1, x_2, ..., x_n\rbrace $, GRU will process the data in the following ways. For simplicity, the bias term is omitted in the following equations.

$z_t = \sigma (W_z x_t + U_z h_{t-1})$

$r_t = \sigma (W_r x_t + U_r h_{t-1})$

$\widetilde{h}_t = \tanh (W x_t + U (r_t \odot h_{t-1}))$

$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \widetilde{h}_t$

where $z_t$ is the update gate, $r_t$ is the reset gate, and the non-linear function $\sigma $ is often chosen as the $sigmoid$ function. In many NLP tasks, we often use a bi-directional GRU, which takes both forward and backward information into account.
<<</Gated Recurrent Unit>>>
<<<Contextual Recurrent Unit>>>
By only modeling word-level representation may have drawbacks in representing the word that has different meanings when the context varies. Here is an example that shows this problem.
There are many fan mails in the mailbox.
There are many fan makers in the factory.
As we can see that, though two sentences share the same beginning before the word fan, the meanings of the word fan itself are totally different when we meet the following word mails and makers. The first fan means “a person that has strong interests in a person or thing", and the second one means “a machine with rotating blades for ventilation". However, the embedding of word fan does not discriminate according to the context. Also, as two sentences have the same beginning, when we apply a recurrent operation (such as GRU) till the word fan, the output of GRU does not change, though they have entirely different meanings when we see the following words.
To enrich the word representation with local contextual information and diminish word ambiguities, we propose a model as an extension to the GRU, called Contextual Recurrent Unit (CRU). In this model, we take full advantage of the convolutional neural network and recurrent neural network, where the former is good at modeling local information, and the latter is capable of capturing long-term dependencies. Moreover, in the experiment part, we will also show that our bidirectional CRU could also significantly outperform the bidirectional GRU model.
In this paper, we propose three different types of CRU models: shallow fusion, deep fusion and deep-enhanced fusion, from the most fundamental one to the most expressive one. We will describe these models in detail in the following sections.
<<<Shallow Fusion>>>
The simplest one is to directly apply a CNN layer after the embedding layer to obtain blended contextual representations. Then a GRU layer is applied afterward. We call this model shallow fusion, because the CNN and RNN are applied linearly without changing the inner architecture of either.
Formally, when given a sequential data $x = \lbrace x_1, x_2, ..., x_n\rbrace $, a shallow fusion of CRU can be illustrated as follows.
We first transform word $x_t$ into word embeddings through an embedding matrix $W_e$. Then a convolutional operation $\phi $ is applied to the context of $e_t$, denoted as $\widetilde{e_t}$, to obtain contextual representations. Finally, the contextual representation $c_t$ is fed into GRU units.
Following BIBREF13, we apply an embedding-wise convolution operation, which is commonly used in natural language processing tasks. Let $e_{i:j} \in \mathbb {R}^{(j-i+1) \times d}$ denote the concatenation of $j-i+1$ consecutive $d$-dimensional word embeddings.

The embedding-wise convolution is to apply a convolution filter $\mathrm {w} \in \mathbb {R}^{k \times d}$ to a window of $k$ word embeddings to generate a new feature, i.e., summarizing a local context of $k$ words. This can be formulated as

$c_i = f(\mathrm {w} \cdot e_{i:i+k-1} + b)$

where $f$ is a non-linear function and $b$ is the bias.

By applying the convolutional filter to all possible windows in the sentence, a feature map $c$ will be generated. In this paper, we apply a same-length convolution (the length of the sentence does not change), i.e. $c \in \mathbb {R}^{n \times 1}$. Then we apply $d$ filters with the same window size to obtain multiple feature maps. So the final output of CNN has the shape of $C \in \mathbb {R}^{n \times d}$, which is exactly the same size as the $n$ word embeddings, which enables us to do exact word-level attention in various tasks.
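A minimal PyTorch sketch of shallow fusion, assuming an embedding-wise, same-length convolution followed by a bidirectional GRU; the hyperparameters are placeholders, not the paper's settings.

# Illustrative sketch of shallow fusion: Conv1d keeps the sequence length,
# so the GRU still produces one output per input token.
import torch.nn as nn

class ShallowFusionCRU(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, kernel_size=3, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # padding = kernel_size // 2 keeps the output length equal to the input length
        self.conv = nn.Conv1d(emb_dim, emb_dim, kernel_size,
                              padding=kernel_size // 2)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        e = self.embed(token_ids)                  # [batch, n, emb_dim]
        c = self.conv(e.transpose(1, 2)).relu()    # [batch, emb_dim, n]
        outputs, _ = self.gru(c.transpose(1, 2))   # [batch, n, 2*hidden]
        return outputs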
<<</Shallow Fusion>>>
<<<Deep Fusion>>>
The contextual information that flows into the update gate and reset gate of GRU is identical in shallow fusion. In order to let the model adaptively control the amount of information that flows into these gates, we can embed CNN into GRU in a deep manner. We can rewrite the Equation 1 to 3 of GRU as follows.

$z_t = \sigma (W_z \phi _z(\widetilde{e}_t) + U_z h_{t-1})$

$r_t = \sigma (W_r \phi _r(\widetilde{e}_t) + U_r h_{t-1})$

$\widetilde{h}_t = \tanh (W \phi (\widetilde{e}_t) + U (r_t \odot h_{t-1}))$

where $\phi _z, \phi _r, \phi $ are three different CNN layers, i.e., the weights are not shared. When the weights are shared across these CNNs, the deep fusion will be degraded to shallow fusion.
<<</Deep Fusion>>>
<<<Deep-Enhanced Fusion>>>
In shallow fusion and deep fusion, we used the convolutional operation to summarize the context. However, one drawback of them is that the original word embedding might be blurred by blending the words around it, i.e., applying the convolutional operation on its context.
For better modeling the original word and its context, we enhanced the deep fusion model with original word embedding information, with an intuition of “enriching word representation with contextual information while preserving its basic meaning”. Figure FIGREF17 illustrates our motivations.
Formally, the Equation 9 to 11 can be further rewritten into

$z_t = \sigma (W_z (\phi _z(\widetilde{e}_t) + e_t) + U_z h_{t-1})$

$r_t = \sigma (W_r (\phi _r(\widetilde{e}_t) + e_t) + U_r h_{t-1})$

$\widetilde{h}_t = \tanh (W (\phi (\widetilde{e}_t) + e_t) + U (r_t \odot h_{t-1}))$

where we add the original word embedding $e_t$ after the CNN operation, to “enhance” the original word information while not losing the contextual information that has been learned from CNNs.
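A small sketch of how the deep-enhanced inputs could be formed, with one separate same-length Conv1d per gate; the recurrent weights themselves are omitted, and all names here are illustrative assumptions rather than the authors' implementation.

# Illustrative sketch: gate-specific convolutional context plus the original
# word embedding. conv_z, conv_r, conv_h are separate nn.Conv1d layers whose
# output channels equal the embedding dimension, so the residual add is valid.
def deep_enhanced_inputs(e, conv_z, conv_r, conv_h):
    # e: word embeddings of shape [batch, n, emb_dim]
    e_t = e.transpose(1, 2)                     # [batch, emb_dim, n] for Conv1d
    x_z = conv_z(e_t).transpose(1, 2) + e       # input to the update gate
    x_r = conv_r(e_t).transpose(1, 2) + e       # input to the reset gate
    x_h = conv_h(e_t).transpose(1, 2) + e       # input to the candidate state
    return x_z, x_r, x_h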
<<</Deep-Enhanced Fusion>>>
<<</Contextual Recurrent Unit>>>
<<</Our approach>>>
<<<Applications>>>
The proposed CRU model is a general neural recurrent unit, so we could apply it to various NLP tasks. As we wonder whether the CRU model could give improvements in both sentence-level modeling and document-level modeling tasks, in this paper, we applied the CRU model to two NLP tasks: sentiment classification and cloze-style reading comprehension. In the sentiment classification task, we build a simple neural model and applied our CRU. In the cloze-style reading comprehension task, we first present some modifications to a recent reading comprehension model, called AoA Reader BIBREF10, and then replace the GRU part by our CRU model to see if our model could give substantial improvements over strong baselines.
<<<Sentiment Classification>>>
In the sentiment classification task, we aim to classify movie reviews, where one movie review will be classified into the positive/negative or subjective/objective category. A general neural network architecture for this task is depicted in Figure FIGREF20.
First, the movie review is transformed into word embeddings. And then, a sequence modeling module is applied, in which we can adopt LSTM, GRU, or our CRU, to capture the inner relations of the text. In this paper, we adopt bidirectional recurrent units for modeling sentences, and then the final hidden outputs are concatenated. After that, a fully connected layer will be added after sequence modeling. Finally, the binary decision is made through a single $sigmoid$ unit.
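As a rough illustration of this classifier (not the authors' code), the sketch below uses a bidirectional GRU encoder; the CRU variants would plug in the same way, and all hyperparameters are placeholders.

# Illustrative sketch: embeddings -> bidirectional recurrent encoder ->
# fully connected layer -> single sigmoid output.
import torch
import torch.nn as nn

class SentimentClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, hidden=256, fc_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, fc_dim)
        self.out = nn.Linear(fc_dim, 1)

    def forward(self, token_ids):
        e = self.embed(token_ids)
        _, h_n = self.encoder(e)                      # h_n: [2, batch, hidden]
        final = torch.cat([h_n[0], h_n[1]], dim=-1)   # concatenate both directions
        return torch.sigmoid(self.out(torch.relu(self.fc(final))))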
As shown, we employed a straightforward neural architecture to this task, as we purely want to compare our CRU model against other sequential models. The detailed experimental result of sentiment classification will be given in the next section.
<<</Sentiment Classification>>>
<<<Reading Comprehension>>>
Besides the sentiment classification task, we also tried our CRU model in cloze-style reading comprehension, which is a much complicated task. In this paper, we strengthened the recent AoA Reader BIBREF10 and applied our CRU model to see if we could obtain substantial improvements when the baseline is strengthened.
<<<Task Description>>>
The cloze-style reading comprehension is a fundamental task that explores relations between the document and the query. Formally, a general cloze-style query can be illustrated as a triple $\langle {\mathcal {D}}, {\mathcal {Q}}, {\mathcal {A}} \rangle $, where $\mathcal {D}$ is the document, $\mathcal {Q}$ is the query and the answer $\mathcal {A}$. Note that the answer is a single word in the document, which requires us to exploit the relationship between the document and query.
<<</Task Description>>>
<<<Modified AoA Reader>>>
In this section, we briefly introduce the original AoA Reader BIBREF10, and illustrate our modifications. When a cloze-style training triple $\langle \mathcal {D}, \mathcal {Q}, \mathcal {A} \rangle $ is given, the Modified AoA Reader will be constructed in the following steps. First, the document and query will be transformed into continuous representations with the embedding layer and recurrent layer. The recurrent layer can be the simple RNN, GRU, LSTM, or our CRU model.
To further strengthen the representation power, we introduce a simple modification in the embedding layer, which we found to give strong empirical gains in performance. The main idea is to utilize additional sparse features of the word and add (concatenate) these features to the word embeddings to enrich the word representations. Such additional features have been shown to be effective in various models BIBREF7, BIBREF17, BIBREF11. In this paper, we adopt two additional features in the document word embeddings (no features are applied to the query side).
$\bullet $ Document word frequency: Calculate each document word frequency. This helps the model to pay more attention to the important (more mentioned) part of the document.
$\bullet $ Count of query word: Count the number of times each document word appears in the query. For example, if a document word appears three times in the query, then the feature value will be 3. We empirically find that, instead of using binary features (appear=1, otherwise=0) BIBREF17, indicating the count of the word provides more information, suggesting that the more often a word occurs in the query, the less likely it is to be the answer. We replace Equation 16 with the following formulation (the query side is not changed),
where $freq(x)$ and $CoQ(x)$ are the features introduced above.
Other parts of the model remain the same as the original AoA Reader. For simplicity, we will omit this part, and the detailed illustrations can be found in BIBREF10.
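As a small illustration of the two document-side features, the sketch below pairs each document token with its raw counts; whether the paper normalises the frequency and exactly how the features enter the (unshown) replacement of Equation 16 is not reproduced here, so this is an assumption for illustration only.

from collections import Counter

def document_features(doc_tokens, query_tokens):
    # (1) document word frequency and (2) count of the word in the query, per document token.
    doc_freq = Counter(doc_tokens)
    query_count = Counter(query_tokens)
    return [(doc_freq[w], query_count[w]) for w in doc_tokens]   # (freq(x), CoQ(x)) pairs

doc = "the fan in the factory is a large fan".split()
query = "what is in the factory".split()
print(document_features(doc, query))
# e.g. the token "fan" gets document frequency 2 and query count 0; these two numbers would be
# concatenated to its word embedding before the recurrent layer.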
<<</Modified AoA Reader>>>
<<</Reading Comprehension>>>
<<</Applications>>>
<<<Experiments: Sentiment Classification>>>
<<<Experimental Setups>>>
In the sentiment classification task, we tried our model on the following public datasets.
[leftmargin=*]
MR Movie reviews with one sentence each. Each review is classified into positive or negative BIBREF18.
IMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19. Note that each movie review may contain several sentences.
SUBJ$^1$ Movie reviews labeled as subjective or objective BIBREF20.
The statistics and hyper-parameter settings of these datasets are listed in Table TABREF33.
As these datasets are quite small and easy to overfit, we apply $l_2$-regularization of 0.0001 to the embedding layer for all datasets. We also apply dropout BIBREF21 to the output of the embedding layer and the fully connected layer. The fully connected layer has a dimension of 1024. For MR and SUBJ, the embedding layer is initialized with 200-dimensional GloVe embeddings (trained on 840B tokens) BIBREF22 and fine-tuned during the training process. For IMDB, the vocabulary is truncated in descending word frequency order. We adopt a batched training strategy of 32 samples with the ADAM optimizer BIBREF23, and clip gradients to 5 BIBREF24. Unless indicated otherwise, the convolutional filter length is set to 3 and ReLU is used as the non-linear function of the CNN in all experiments. We use 10-fold cross-validation (CV) for datasets that have no train/valid/test division.
<<</Experimental Setups>>>
<<<Results>>>
The experimental results are shown in Table TABREF35. As mentioned before, all RNNs in these models are bi-directional, because we want to know whether our bi-CRU can still give substantial improvements over a bi-GRU, which captures both history and future information. As we can see, all variants of our CRU model give substantial improvements over the traditional GRU model, with maximum gains of 2.7%, 1.0%, and 1.9% on the three datasets, respectively. We also find that, although we adopt a straightforward classification model, our CRU model outperforms the state-of-the-art systems by 0.6%, 0.7%, and 0.8% respectively, which demonstrates its effectiveness. By employing a more sophisticated architecture or introducing task-specific features, we think there is still much room for further improvements, which is beyond the scope of this paper.
When comparing the three variants of the CRU model, as expected, the CRU with deep-enhanced fusion performs best among them. This demonstrates that incorporating contextual representations with the original word embeddings enhances the representation power. We also notice that using a larger window size for the convolutional filter, i.e., 5 in this experiment, does not improve performance. We plot the trend of MR test set accuracy with increasing convolutional filter length, as shown in Figure FIGREF36.
As we can see, using a smaller convolutional filter does not provide much contextual information, thus giving a lower accuracy. On the contrary, larger filters generally outperform smaller ones, but not always. One possible reason is that when the filter becomes larger, the amortized contextual information per position is less than with a smaller filter, making it harder for the model to learn the contextual information. However, we think the proper size of the convolutional filter may vary from task to task. Tasks that require long-span contextual information may benefit from a larger filter.
We also compare our CRU model with related works that combine CNN and RNN BIBREF3, BIBREF4, BIBREF5. From the results, we can see that our CRU model significantly outperforms previous works, which demonstrates that employing deep fusion and enhancing the contextual representations with the original embeddings can substantially improve the power of word representations.
We also plot the trend of IMDB test set accuracy during the training process, as depicted in Figure FIGREF37. As we can see, after six epochs of training, all variants of the CRU model show faster convergence and smaller performance fluctuation than the traditional GRU model, which demonstrates that the proposed CRU model has better training stability.
<<</Results>>>
<<</Experiments: Sentiment Classification>>>
<<<Experiments: Reading Comprehension>>>
<<</Experiments: Reading Comprehension>>>
<<<Qualitative Analysis>>>
In this section, we give a qualitative analysis of our proposed CRU model on the sentiment classification task. We focus on two categories of movie reviews for which it is harder for the model to judge the correct sentiment. The first is reviews that contain negation terms, such as “not”. The second is reviews that contain a sentiment transition, such as “clever but not compelling”. We manually select 50 samples of each category from the MR dataset, forming a total of 100 samples, to see whether our CRU model is superior in handling these movie reviews. The results are shown in Table TABREF45. As we can see, our CRU model is better at both categories of movie review classification, demonstrating its effectiveness.
Among these samples, we select an intuitive example in which the CRU successfully captures the true meaning of the sentence and gives the correct sentiment label. We segment a full movie review into three sentences, as shown in Table TABREF46.
For the first and second sentences, both models give the correct sentiment prediction. When the third sentence is introduced, the GRU baseline model fails to recognize this review as positive because there are many negation terms in the sentence. However, our CRU model captures the local context while recurrently modeling the sentence, and phrases such as “not making fun” and “not laughing at” are correctly noted as positive sentiment, which corrects the sentiment category of the full review, suggesting that our model is superior at modeling local context and gives a more accurate meaning.
<<</Qualitative Analysis>>>
<<<Conclusion>>>
In this paper, we proposed an effective recurrent model for modeling sequences, called Contextual Recurrent Units (CRU). We inject the CNN into GRU, which aims to better model the local context information via CNN before recurrently modeling the sequence. We have tested our CRU model on the cloze-style reading comprehension task and sentiment classification task. Experimental results show that our model could give substantial improvements over various state-of-the-art systems and set up new records on the respective public datasets. In the future, we plan to investigate convolutional filters that have dynamic lengths to adaptively capture the possible spans of its context.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Experiments: Sentiment Classification"
],
"type": "disordered_section"
}
|
1911.05960
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Contextual Recurrent Units for Cloze-style Reading Comprehension
<<<Abstract>>>
Recurrent Neural Networks (RNN) are known as powerful models for handling sequential data, and are especially widely utilized in various natural language processing tasks. In this paper, we propose Contextual Recurrent Units (CRU) for enhancing local contextual representations in neural networks. The proposed CRU injects convolutional neural networks (CNN) into the recurrent units to enhance the ability to model the local context and reduce word ambiguities, even in bi-directional RNNs. We test our CRU model on sentence-level and document-level NLP modeling tasks: sentiment classification and reading comprehension. Experimental results show that the proposed CRU model gives significant improvements over traditional CNN or RNN models, including bidirectional variants, as well as over various state-of-the-art systems on both tasks, showing promising extensibility to other NLP tasks.
<<</Abstract>>>
<<<Introduction>>>
Neural network based approaches have become popular frameworks in many machine learning research fields, showing its advantages over traditional methods. In NLP tasks, two types of neural networks are widely used: Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN).
RNNs are powerful models in various NLP tasks, such as machine translation BIBREF0, sentiment classification BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, reading comprehension BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, etc. Recurrent neural networks can flexibly model sequences of different lengths into a fixed representation. There are two main implementations of RNN: Long Short-Term Memory (LSTM) BIBREF12 and the Gated Recurrent Unit (GRU) BIBREF0, which solve the vanishing gradient problem of vanilla RNNs.
Compared to RNNs, the CNN model also shows competitive performance in some tasks, such as text classification BIBREF13. However, unlike an RNN, a CNN uses a pre-defined convolutional kernel to “summarize” a fixed window of adjacent elements into blended representations, showing its ability to model local context.
As both global and local information is important in most of NLP tasks BIBREF14, in this paper, we propose a novel recurrent unit, called Contextual Recurrent Unit (CRU). The proposed CRU model adopts advantages of RNN and CNN, where CNN is good at modeling local context, and RNN is superior in capturing long-term dependencies. We propose three variants of our CRU model: shallow fusion, deep fusion and deep-enhanced fusion.
To verify the effectiveness of our CRU model, we apply it to two different NLP tasks: sentiment classification and reading comprehension, where the former is sentence-level modeling and the latter is document-level modeling. In the sentiment classification task, we build a standard neural network and replace the recurrent unit with our CRU model. To further demonstrate the effectiveness of our model, we also test our CRU on reading comprehension tasks with a strengthened baseline system originating from the Attention-over-Attention Reader (AoA Reader) BIBREF10. Experimental results on public datasets show that our CRU model substantially outperforms various systems by a large margin and sets up new state-of-the-art performances on the related datasets. The main contributions of our work are listed as follows.
[leftmargin=*]
We propose a novel neural recurrent unit called the Contextual Recurrent Unit (CRU), which effectively incorporates the advantages of CNN and RNN. Unlike previous works, our CRU model retains the same flexibility as the GRU while providing better performance.
The CRU model is applied to both sentence-level and document-level modeling tasks and gives state-of-the-art performances.
The CRU also gives substantial improvements on the cloze-style reading comprehension task when the baseline system is strengthened by incorporating additional features, which enrich the representations of unknown words and make the texts more readable to the machine.
<<</Introduction>>>
<<<Related Works>>>
The gated recurrent unit (GRU) was proposed in the context of neural machine translation BIBREF0. It has been shown that the GRU has performance comparable to the LSTM in some tasks. Another advantage of the GRU is that it has a simpler neural architecture than the LSTM, making computation much more efficient.
However, the convolutional neural network (CNN) is not as popular as RNNs in NLP tasks, as texts are formed temporally. Still, in some studies, such as text classification BIBREF13, the CNN shows performance competitive with RNN models.
Various efforts have been made on combining CNN and RNN. BIBREF3 proposed an architecture that combines CNN and GRU models with word embeddings pre-trained by word2vec. BIBREF5 proposed to combine an asymmetric convolutional neural network with a bidirectional LSTM network. BIBREF4 presented the Dependency Sensitive CNN, which hierarchically constructs text representations using LSTMs and subsequently extracts features with convolution operations. BIBREF15 propose to make use of dependency relation information in the shortest dependency path (SDP) by combining CNN and two-channel LSTM units. BIBREF16 build a neural network for dialogue topic tracking where the CNN is used to account for semantics at the individual utterance level and the RNN for modeling conversational contexts along multiple turns of history.
The differences between our CRU model and previous works can be summarised as follows.
[leftmargin=*]
Our CRU model could adaptively control the amount of information that flows into different gates, which was not studied in previous works.
Also, the CRU does not introduce a pooling operation, as opposed to other works such as CNN-GRU BIBREF3. Our motivation is to provide the same flexibility as the original GRU, whereas a pooling operation breaks this property (the output length is changed) and makes it impossible to do exact word-level attention over the output. In our CRU model, the output length is the same as the input's, so the model can easily be applied to various tasks where a GRU is used.
We also observed that using a CNN alone to summarise contextual information is not strong enough, so we incorporate the original word embeddings to form a "word + context" representation for enhancement.
<<</Related Works>>>
<<<Our approach>>>
In this section, we will give a detailed introduction to our CRU model. Firstly, we will give a brief introduction to GRU BIBREF0 as preliminaries, and then three variants of our CRU model will be illustrated.
<<<Gated Recurrent Unit>>>
The Gated Recurrent Unit (GRU) is a type of recurrent unit that models sequential data BIBREF0; it is similar to the LSTM but much simpler and more computationally efficient. We briefly introduce the formulation of the GRU. Given a sequence $x = \lbrace x_1, x_2, ..., x_n\rbrace $, the GRU processes the data as follows. For simplicity, the bias term is omitted in the following equations.
where $z_t$ is the update gate, $r_t$ is the reset gate, and the non-linear function $\sigma $ is often chosen to be the $sigmoid$ function. In many NLP tasks, we often use a bi-directional GRU, which takes both forward and backward information into account.
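Because the gate equations themselves are not reproduced above, the short numpy sketch below spells out one standard GRU step (update gate, reset gate, candidate state, interpolation) with biases omitted as in the text; the paper's exact equation numbering and interpolation convention may differ slightly.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, W, U):
    # One step of a standard GRU (bias terms omitted).
    z = sigmoid(Wz @ x_t + Uz @ h_prev)              # update gate z_t
    r = sigmoid(Wr @ x_t + Ur @ h_prev)              # reset gate r_t
    h_tilde = np.tanh(W @ x_t + U @ (r * h_prev))    # candidate state
    return z * h_prev + (1.0 - z) * h_tilde          # interpolate previous state and candidate

d, h = 4, 3
rng = np.random.default_rng(0)
params = [rng.standard_normal((h, d)) if i % 2 == 0 else rng.standard_normal((h, h)) for i in range(6)]
h_t = gru_step(rng.standard_normal(d), np.zeros(h), *params)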
<<</Gated Recurrent Unit>>>
<<<Contextual Recurrent Unit>>>
Modeling only word-level representations may have drawbacks in representing a word that has different meanings when the context varies. Here is an example that shows this problem.
There are many fan mails in the mailbox.
There are many fan makers in the factory.
As we can see, though the two sentences share the same beginning before the word fan, the meanings of the word fan itself are totally different once we meet the following words mails and makers. The first fan means “a person that has strong interests in a person or thing”, and the second one means “a machine with rotating blades for ventilation”. However, the embedding of the word fan does not discriminate according to the context. Also, as the two sentences have the same beginning, when we apply a recurrent operation (such as a GRU) up to the word fan, the output of the GRU is identical, though the sentences have entirely different meanings once we see the following words.
To enrich the word representation with local contextual information and diminish word ambiguities, we propose a model as an extension to the GRU, called the Contextual Recurrent Unit (CRU). In this model, we take full advantage of the convolutional neural network and the recurrent neural network, where the former is good at modeling local information and the latter is capable of capturing long-term dependencies. Moreover, in the experimental part, we will also show that our bidirectional CRU significantly outperforms the bidirectional GRU model.
In this paper, we propose three different types of CRU models: shallow fusion, deep fusion and deep-enhanced fusion, from the most fundamental one to the most expressive one. We will describe these models in detail in the following sections.
<<<Shallow Fusion>>>
The simplest variant is to directly apply a CNN layer after the embedding layer to obtain blended contextual representations, and then apply a GRU layer afterwards. We call this model shallow fusion, because the CNN and RNN are applied one after the other without changing the inner architecture of either.
Formally, when given a sequential data $x = \lbrace x_1, x_2, ..., x_n\rbrace $, a shallow fusion of CRU can be illustrated as follows.
We first transform word $x_t$ into word embeddings through an embedding matrix $W_e$. Then a convolutional operation $\phi $ is applied to the context of $e_t$, denoted as $\widetilde{e_t}$, to obtain contextual representations. Finally, the contextual representation $c_t$ is fed into GRU units.
Following BIBREF13, we apply an embedding-wise convolution operation, which is commonly used in natural language processing tasks. Let $e_{i:j} \in \mathbb {R}^{j*d}$ denote the concatenation of $j-i+1$ consecutive $d$-dimensional word embeddings.
The embedding-wise convolution applies a convolution filter w $\in \mathbb {R}^{k*d}$ to a window of $k$ word embeddings to generate a new feature, i.e., summarizing a local context of $k$ words. This can be formulated as
$c_i = f(\textbf {w} \cdot e_{i:i+k-1} + b)$
where $f$ is a non-linear function and $b$ is the bias.
By applying the convolutional filter to all possible windows in the sentence, a feature map $c$ will be generated. In this paper, we apply a same-length convolution (the length of the sentence does not change), i.e. $c \in \mathbb {R}^{n*1}$. Then we apply $d$ filters with the same window size to obtain multiple feature maps, so the final output of the CNN has the shape $C \in \mathbb {R}^{n*d}$, which is exactly the same size as the $n$ word embeddings; this enables us to do exact word-level attention in various tasks.
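A compact PyTorch sketch of shallow fusion under these definitions is given below: d filters of width k with symmetric padding give a same-length feature map C of shape n x d, which is then fed to a GRU; the odd filter width and the hidden size are assumptions made to keep the sketch simple.

import torch
import torch.nn as nn

class ShallowFusion(nn.Module):
    def __init__(self, vocab_size, d=200, k=3, hidden=256):
        super().__init__()
        assert k % 2 == 1, "an odd filter width keeps the sequence length unchanged with symmetric padding"
        self.emb = nn.Embedding(vocab_size, d)
        # d filters over windows of k embeddings -> feature map C of shape (n, d)
        self.conv = nn.Conv1d(in_channels=d, out_channels=d, kernel_size=k, padding=k // 2)
        self.gru = nn.GRU(d, hidden, batch_first=True)

    def forward(self, token_ids):                       # token_ids: (batch, n)
        e = self.emb(token_ids)                         # (batch, n, d)
        c = torch.relu(self.conv(e.transpose(1, 2)))    # convolve along the time axis: (batch, d, n)
        c = c.transpose(1, 2)                           # back to (batch, n, d), same length as the input
        out, _ = self.gru(c)                            # word-level outputs, still usable for attention
        return out

out = ShallowFusion(vocab_size=1000)(torch.randint(0, 1000, (2, 7)))   # -> shape (2, 7, 256)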
<<</Shallow Fusion>>>
<<<Deep Fusion>>>
In shallow fusion, the contextual information that flows into the update gate and the reset gate of the GRU is identical. In order to let the model adaptively control the amount of information that flows into these gates, we can embed the CNN into the GRU in a deep manner. We can rewrite Equations 1 to 3 of the GRU as follows.
where $\phi _z, \phi _r, \phi $ are three different CNN layers, i.e., the weights are not shared. When the weights are shared across these CNNs, deep fusion degrades to shallow fusion.
<<</Deep Fusion>>>
<<<Deep-Enhanced Fusion>>>
In shallow fusion and deep fusion, we used the convolutional operation to summarize the context. However, one drawback of these approaches is that the original word embedding might be blurred by blending in the words around it, i.e., by applying the convolutional operation over its context.
To better model the original word and its context, we enhance the deep fusion model with the original word embedding information, with the intuition of “enriching the word representation with contextual information while preserving its basic meaning”. Figure FIGREF17 illustrates our motivation.
Formally, Equations 9 to 11 can be further rewritten as
where we add the original word embedding $e_t$ after the CNN operation, to “enhance” the original word information while not losing the contextual information that has been learned from the CNNs.
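Since the rewritten gate equations are not shown above, the following sketch is only an interpretation of the description: three unshared convolutions produce the contextual inputs for the update gate, the reset gate and the candidate state, the original embedding e_t is added back to each of them, and a GRU-style recurrence consumes these enhanced contexts; the exact gate arithmetic is an assumption.

import torch
import torch.nn as nn

class DeepEnhancedCRU(nn.Module):
    def __init__(self, d=200, hidden=128, k=3):
        super().__init__()
        make_conv = lambda: nn.Conv1d(d, d, k, padding=k // 2)
        self.conv_z, self.conv_r, self.conv_h = make_conv(), make_conv(), make_conv()   # unshared CNNs
        self.Wz, self.Uz = nn.Linear(d, hidden, bias=False), nn.Linear(hidden, hidden, bias=False)
        self.Wr, self.Ur = nn.Linear(d, hidden, bias=False), nn.Linear(hidden, hidden, bias=False)
        self.W,  self.U  = nn.Linear(d, hidden, bias=False), nn.Linear(hidden, hidden, bias=False)
        self.hidden = hidden

    def forward(self, e):                                # e: (batch, n, d) word embeddings
        ctx = lambda conv: conv(e.transpose(1, 2)).transpose(1, 2) + e   # convolved context + original e_t
        cz, cr, ch = ctx(self.conv_z), ctx(self.conv_r), ctx(self.conv_h)
        h = e.new_zeros(e.size(0), self.hidden)
        outputs = []
        for t in range(e.size(1)):                       # GRU-style recurrence over the enhanced contexts
            z = torch.sigmoid(self.Wz(cz[:, t]) + self.Uz(h))
            r = torch.sigmoid(self.Wr(cr[:, t]) + self.Ur(h))
            h_tilde = torch.tanh(self.W(ch[:, t]) + self.U(r * h))
            h = z * h + (1 - z) * h_tilde
            outputs.append(h)
        return torch.stack(outputs, dim=1)               # (batch, n, hidden)

out = DeepEnhancedCRU()(torch.randn(2, 5, 200))          # -> shape (2, 5, 128)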
<<</Deep-Enhanced Fusion>>>
<<</Contextual Recurrent Unit>>>
<<</Our approach>>>
<<<Applications>>>
The proposed CRU model is a general neural recurrent unit, so it can be applied to various NLP tasks. To examine whether the CRU model gives improvements in both sentence-level and document-level modeling, in this paper we apply the CRU model to two NLP tasks: sentiment classification and cloze-style reading comprehension. In the sentiment classification task, we build a simple neural model and apply our CRU. In the cloze-style reading comprehension task, we first present some modifications to a recent reading comprehension model, the AoA Reader BIBREF10, and then replace its GRU component with our CRU model to see whether our model gives substantial improvements over strong baselines.
<<<Sentiment Classification>>>
In the sentiment classification task, we aim to classify movie reviews, where each movie review is classified into the positive/negative or subjective/objective category. A general neural network architecture for this task is depicted in Figure FIGREF20.
First, the movie review is transformed into word embeddings. Then a sequence modeling module is applied, in which we can adopt an LSTM, a GRU, or our CRU, to capture the inner relations of the text. In this paper, we adopt bidirectional recurrent units for modeling sentences, and the final hidden outputs are concatenated. After that, a fully connected layer is added on top of the sequence modeling module. Finally, the binary decision is made through a single $sigmoid$ unit.
As shown, we employ a straightforward neural architecture for this task, since our aim is purely to compare our CRU model against other sequential models. The detailed experimental results for sentiment classification are given in the next section.
<<</Sentiment Classification>>>
<<<Reading Comprehension>>>
Besides the sentiment classification task, we also tried our CRU model on cloze-style reading comprehension, which is a much more complicated task. In this paper, we strengthen the recent AoA Reader BIBREF10 and apply our CRU model to see whether we can obtain substantial improvements even when the baseline is strengthened.
<<<Task Description>>>
The cloze-style reading comprehension is a fundamental task that explores relations between the document and the query. Formally, a general cloze-style query can be illustrated as a triple $\langle {\mathcal {D}}, {\mathcal {Q}}, {\mathcal {A}} \rangle $, where $\mathcal {D}$ is the document, $\mathcal {Q}$ is the query and $\mathcal {A}$ is the answer. Note that the answer is a single word in the document, which requires us to exploit the relationship between the document and the query.
<<</Task Description>>>
<<<Modified AoA Reader>>>
In this section, we briefly introduce the original AoA Reader BIBREF10, and illustrate our modifications. When a cloze-style training triple $\langle \mathcal {D}, \mathcal {Q}, \mathcal {A} \rangle $ is given, the Modified AoA Reader will be constructed in the following steps. First, the document and query will be transformed into continuous representations with the embedding layer and recurrent layer. The recurrent layer can be the simple RNN, GRU, LSTM, or our CRU model.
To further strengthen the representation power, we introduce a simple modification in the embedding layer, which yields strong empirical gains in performance. The main idea is to utilize additional sparse features of the word and add (concatenate) these features to the word embeddings to enrich the word representations. Such additional features have proven effective in various models BIBREF7, BIBREF17, BIBREF11. In this paper, we adopt two additional features in the document word embeddings (no features are applied to the query side).
$\bullet $ Document word frequency: Calculate each document word frequency. This helps the model to pay more attention to the important (more mentioned) part of the document.
$\bullet $ Count of query word: Count the number of times each document word appears in the query. For example, if a document word appears three times in the query, then the feature value will be 3. We empirically find that, instead of using binary features (appear=1, otherwise=0) BIBREF17, indicating the count of the word provides more information, suggesting that the more often a word occurs in the query, the less likely it is to be the answer. We replace Equation 16 with the following formulation (the query side is not changed),
where $freq(x)$ and $CoQ(x)$ are the features introduced above.
Other parts of the model remain the same as the original AoA Reader. For simplicity, we will omit this part, and the detailed illustrations can be found in BIBREF10.
<<</Modified AoA Reader>>>
<<</Reading Comprehension>>>
<<</Applications>>>
<<<Experiments: Sentiment Classification>>>
<<<Experimental Setups>>>
In the sentiment classification task, we tried our model on the following public datasets.
[leftmargin=*]
MR Movie reviews with one sentence each. Each review is classified into positive or negative BIBREF18.
IMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19. Note that each movie review may contain several sentences.
SUBJ$^1$ Movie reviews labeled as subjective or objective BIBREF20.
The statistics and hyper-parameter settings of these datasets are listed in Table TABREF33.
As these datasets are quite small and easy to overfit, we apply $l_2$-regularization of 0.0001 to the embedding layer for all datasets. We also apply dropout BIBREF21 to the output of the embedding layer and the fully connected layer. The fully connected layer has a dimension of 1024. For MR and SUBJ, the embedding layer is initialized with 200-dimensional GloVe embeddings (trained on 840B tokens) BIBREF22 and fine-tuned during the training process. For IMDB, the vocabulary is truncated in descending word frequency order. We adopt a batched training strategy of 32 samples with the ADAM optimizer BIBREF23, and clip gradients to 5 BIBREF24. Unless indicated otherwise, the convolutional filter length is set to 3 and ReLU is used as the non-linear function of the CNN in all experiments. We use 10-fold cross-validation (CV) for datasets that have no train/valid/test division.
<<</Experimental Setups>>>
<<<Results>>>
The experimental results are shown in Table TABREF35. As mentioned before, all RNNs in these models are bi-directional, because we want to know whether our bi-CRU can still give substantial improvements over a bi-GRU, which captures both history and future information. As we can see, all variants of our CRU model give substantial improvements over the traditional GRU model, with maximum gains of 2.7%, 1.0%, and 1.9% on the three datasets, respectively. We also find that, although we adopt a straightforward classification model, our CRU model outperforms the state-of-the-art systems by 0.6%, 0.7%, and 0.8% respectively, which demonstrates its effectiveness. By employing a more sophisticated architecture or introducing task-specific features, we think there is still much room for further improvements, which is beyond the scope of this paper.
When comparing the three variants of the CRU model, as expected, the CRU with deep-enhanced fusion performs best among them. This demonstrates that incorporating contextual representations with the original word embeddings enhances the representation power. We also notice that using a larger window size for the convolutional filter, i.e., 5 in this experiment, does not improve performance. We plot the trend of MR test set accuracy with increasing convolutional filter length, as shown in Figure FIGREF36.
As we can see, using a smaller convolutional filter does not provide much contextual information, thus giving a lower accuracy. On the contrary, larger filters generally outperform smaller ones, but not always. One possible reason is that when the filter becomes larger, the amortized contextual information per position is less than with a smaller filter, making it harder for the model to learn the contextual information. However, we think the proper size of the convolutional filter may vary from task to task. Tasks that require long-span contextual information may benefit from a larger filter.
We also compare our CRU model with related works that combine CNN and RNN BIBREF3, BIBREF4, BIBREF5. From the results, we can see that our CRU model significantly outperforms previous works, which demonstrates that employing deep fusion and enhancing the contextual representations with the original embeddings can substantially improve the power of word representations.
We also plot the trend of IMDB test set accuracy during the training process, as depicted in Figure FIGREF37. As we can see, after six epochs of training, all variants of the CRU model show faster convergence and smaller performance fluctuation than the traditional GRU model, which demonstrates that the proposed CRU model has better training stability.
<<</Results>>>
<<</Experiments: Sentiment Classification>>>
<<<Experiments: Reading Comprehension>>>
<<</Experiments: Reading Comprehension>>>
<<<Qualitative Analysis>>>
In this section, we give a qualitative analysis of our proposed CRU model on the sentiment classification task. We focus on two categories of movie reviews for which it is harder for the model to judge the correct sentiment. The first is reviews that contain negation terms, such as “not”. The second is reviews that contain a sentiment transition, such as “clever but not compelling”. We manually select 50 samples of each category from the MR dataset, forming a total of 100 samples, to see whether our CRU model is superior in handling these movie reviews. The results are shown in Table TABREF45. As we can see, our CRU model is better at both categories of movie review classification, demonstrating its effectiveness.
Among these samples, we select an intuitive example in which the CRU successfully captures the true meaning of the sentence and gives the correct sentiment label. We segment a full movie review into three sentences, as shown in Table TABREF46.
For the first and second sentences, both models give the correct sentiment prediction. When the third sentence is introduced, the GRU baseline model fails to recognize this review as positive because there are many negation terms in the sentence. However, our CRU model captures the local context while recurrently modeling the sentence, and phrases such as “not making fun” and “not laughing at” are correctly noted as positive sentiment, which corrects the sentiment category of the full review, suggesting that our model is superior at modeling local context and gives a more accurate meaning.
<<</Qualitative Analysis>>>
<<<Conclusion>>>
In this paper, we proposed an effective recurrent model for modeling sequences, called Contextual Recurrent Units (CRU). We inject the CNN into GRU, which aims to better model the local context information via CNN before recurrently modeling the sequence. We have tested our CRU model on the cloze-style reading comprehension task and sentiment classification task. Experimental results show that our model could give substantial improvements over various state-of-the-art systems and set up new records on the respective public datasets. In the future, we plan to investigate convolutional filters that have dynamic lengths to adaptively capture the possible spans of its context.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Related Works, Conclusion"
],
"type": "disordered_section"
}
|
2001.11899
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
An efficient automated data analytics approach to large scale computational comparative linguistics
<<<Abstract>>>
This research project aimed to overcome the challenge of analysing human language relationships, facilitate the grouping of languages and formation of genealogical relationship between them by developing automated comparison techniques. Techniques were based on the phonetic representation of certain key words and concept. Example word sets included numbers 1-10 (curated), large database of numbers 1-10 and sheep counting numbers 1-10 (other sources), colours (curated), basic words (curated). ::: To enable comparison within the sets the measure of Edit distance was calculated based on Levenshtein distance metric. This metric between two strings is the minimum number of single-character edits, operations including: insertions, deletions or substitutions. To explore which words exhibit more or less variation, which words are more preserved and examine how languages could be grouped based on linguistic distances within sets, several data analytics techniques were involved. Those included density evaluation, hierarchical clustering, silhouette, mean, standard deviation and Bhattacharya coefficient calculations. These techniques lead to the development of a workflow which was later implemented by combining Unix shell scripts, a developed R package and SWI Prolog. This proved to be computationally efficient and permitted the fast exploration of large language sets and their analysis.
<<</Abstract>>>
<<<Introduction>>>
The need to uncover presumed underlying linguistic evolutionary principles and to analyse the correlations between the world's languages has motivated this research. For centuries people have speculated about the origins of language, yet the subject is still obscure. Non-automated linguistic analysis of language relationships has been complicated and very time-consuming. Consequently, this research aims to apply a computational approach to compare human languages, based on the phonetic representation of certain key words and concepts. This comparison of word similarity aims to facilitate the grouping of languages and the analysis of the formation of genealogical relationships between languages.
This report contains a thorough description of the proposed methods and developed techniques, and a discussion of the results. During this project several collections of words were gathered and examined, including colour words and numbers. The methods included edit distance, a phonetic substitution table, hierarchical clustering with a cut, and other analysis methods. They all aimed to provide insight through both technical data summaries and visual representations.
<<</Introduction>>>
<<<Background>>>
<<<Human languages>>>
For centuries, people have speculated over the origins of language and its early development. It is believed that language first appeared among Homo Sapiens somewhere between 50,000 and 150,000 years ago BIBREF0. However, the origins of human language are very obscure.
To begin with, it is still unknown whether human language originated from one original and universal Proto-Language. Alfredo Trombetti made the first scientific attempt to establish the reality of monogenesis in languages. His investigation concluded that such a language was spoken between 100,000 and 200,000 years ago, close to the first emergence of Homo Sapiens BIBREF1. However, his conclusion was never widely accepted. The concept of a Proto-Language is purely hypothetical and not amenable to analysis in historical linguistics.
Furthermore, there are multiple theories of how language evolved. These could be separated into two distinctly different groups.
Firstly, some researchers claim that language evolved as a result of other evolutionary processes, essentially making it a by-product of evolution, selection for other abilities or as a consequence of yet unknown laws of growth and form. This theory is clearly established in Noam Chomsky BIBREF2 and Stephen Jay Gould's work BIBREF3. Both scientists hypothesize that language evolved together with the human brain, or with the evolution of cognitive structures. They were used for tool making, information processing, learning and were also beneficial for complex communication. This conforms with the theory that as our brains became larger, our cognitive functions increased.
Secondly, another widely held theory is that language came about as an evolutionary adaptation, which is when a population undergoes a change in process over time to survive better. Scientists Steven Pinker and Paul Bloom in “Natural Language and Natural Selection” BIBREF4 theorize that a series of calls or gestures evolved over time into combinations, resulting in complex communication.
Today there are 7,111 distinct languages spoken worldwide according to the 2019 Ethnologue language database. Many circumstances such as the spread of old civilizations, geographical features, and history determine the number of languages spoken in a particular region. Nearly two thirds of languages are from Asia and Africa.
The Asian continent has the largest number of spoken languages - 2,303. Africa follows closely with 2,140 languages spoken across continent. However, given the population of certain areas and colonial expansion in recent centuries, 86 percent of people use languages from Europe and Asia. It is estimated that there is around 4.2 billion speakers of Asian languages and around 1.75 billion speakers of European languages.
Moreover, Pacific languages have approximately 1,000 speakers each on average, but altogether, they represent more than a third of our world’s languages. Papua New Guinea is the most linguistically diverse country in the world. This is possibly due to the effect of its geography imposing isolation on communities. It has over 840 languages spoken, with twelve of them lacking many speakers. It is followed by Indonesia, which has 709 languages spoken across the country.
<<<Indo-European languages and Kurgan Hypothesis>>>
Indo-European languages is a language family that represents most of the modern languages of Europe, as well as specific languages of Asia. Indo-European language family consist of several hundreds of related languages and dialects. Consequently, it was an interest of the linguists to explore the origins of the Indo-European language family.
In the mid-1950s, Marija Gimbutas, a Lithuanian-American archaeologist and anthropologist, combined her substantial background in linguistic paleontology with archaeological evidence to formulate the Kurgan hypothesis BIBREF5. This hypothesis is the most widely accepted proposal to identify the homeland of the speakers of Proto-Indo-European (PIE), the ancient common ancestor of the Indo-European languages, and to explain the rapid and extensive spread of Indo-European languages throughout Europe and Asia BIBREF6 BIBREF7. The Kurgan hypothesis proposes that the most likely speakers of the Proto-Indo-European language were people of a Kurgan culture in the Pontic steppe, on the north side of the Black Sea. It also divides the Kurgan culture into four successive stages (I, II, III, IV) and identifies three waves of expansion (I, II, III). In addition, the model suggests that the Indo-European migration took place from 4000 to 1000 BC. See Figure FIGREF4 for a visual illustration of the Indo-European migration.
Today there are approximately 445 living Indo-European languages, which are spoken by 3.2 billion people, according to Ethnologue. They are divided into the following groups: Albanian, Armenian, Baltic, Slavic, Celtic, Germanic, Hellenic, Indo-Iranian and Italic (Romance) FIGREF3 BIBREF8.
<<</Indo-European languages and Kurgan Hypothesis>>>
<<<Brittonic languages>>>
The Brittonic or British Celtic languages derive from the Common Brittonic language, spoken throughout Great Britain south of the Firth of Forth during the Iron Age and Roman period. They are classified as Indo-European Celtic languages BIBREF10. The family tree of the Brittonic languages is shown in Table TABREF6. Common Brittonic is ancestral to Western and Southwestern Brittonic. Consequently, Cumbric and Welsh, the latter spoken in Wales, derived from Western Brittonic, while Cornish and Breton, spoken in Cornwall and Brittany respectively, originated from the Southwestern branch.
Today Welsh, Cornish and Breton are still in use. However, it is worth pointing out that Cornish is a language revived by second-language learners, the last native speakers having died in the late 18th century. Some people claimed that the Cornish language was an important part of their identity, culture and heritage, and a revival began in the early 20th century. Cornish is currently a recognised minority language under the European Charter for Regional or Minority Languages.
<<</Brittonic languages>>>
<<<Sheep Counting System>>>
The Brittonic Celtic language is an ancestor of the number names used for sheep counting BIBREF11 BIBREF12. Until the Industrial Revolution, the use of traditional number systems was common among shepherds, especially in the fells of the Lake District. The sheep-counting system was referred to as Yan Tan Tethera. It was used across Northern England and in other parts of Britain in earlier times. The number names varied according to dialect, geography, and other factors. They also preserve interesting indications of how languages evolved over time.
The word “yan” or “yen”, meaning “one” in some northern English dialects, represents a regular development in Northern English BIBREF13. During this development the Old English long vowel <ā> was broken into /ie/, /ia/ and so on. This explains the shift to “yan” and “ane” from the Old English ān, which is itself derived from the Proto-Germanic “*ainaz” BIBREF14.
In addition, the counting system demonstrates a clear connection with counting on the fingers, particularly after numbers reach 10, as the best known examples are formed according to this structure: 1 and 10, 2 and 10, up to 15, and then 1 and 15, 2 and 15, up to 20. The count would end at 20. This might be due to the fact that shepherds, on reaching 20, would transfer a pebble or marble from one pocket to another, so as to keep a tally of the number of scores.
<<</Sheep Counting System>>>
<<</Human languages>>>
<<</Background>>>
<<<Aims and Objectives>>>
<<<Overall Aim>>>
The aim of this research was to develop computational methods to compare human languages based on the phonetic form of single words (i.e. not exploiting grammar). This comparison of word similarity aims to facilitate the grouping of languages, the identification of the presumed underlying linguistic evolutionary principles, and the analysis of the formation of genealogical relationships between languages.
<<</Overall Aim>>>
<<<Specific Objectives>>>
Devise a way to encode the phonetic representation of words, using:
an in-house encoding,
an IPA (International Phonetic Alphabet).
Develop methods to analyze the comparative relationships between languages using: descriptive and inferential statistics, clustering, visualisation of the data, and analysis of the results.
Implement a repeatable process for running the analysis methods with new data.
Analyse the correlation between geographical distance and language similarity (linguistic distance), and investigate if it explains the evolutionary distance.
Examine which words exhibit more or less variation and the likely causes of it.
Explore which words are preserved better across the same language group and possible reasons behind it.
Explore which language group preserves particular words more in comparison to others and potential reasons behind it.
Determine if certain language groups are correct and explore the possibility of forming new ones.
<<</Specific Objectives>>>
<<</Aims and Objectives>>>
<<<Data>>>
<<<Language files>>>
Language file or database is a set of languages, each of which is associated with an ordered list of words. All lists of words for a particular data set have the same length. For example:
numbers(romani,[iek,dui,trin,shtar,panj,shov,efta,oksto,ena,desh]).
numbers(english,[wun,too,three,foor,five,siks,seven,eit,nine,ten]).
numbers(french,[un,de,troi,katre,sink,sis,set,wuit,neuf,dis]).
Words and languages are encoded in this format for later use in Prolog. In Prolog each “numbers” line is a fact with two arguments; the first is the language name and the second is a list (indicated by square brackets) of words. Words can be written down in their original form or encoded phonetically (as shown in the example). Where synonyms for a word are known, the word itself is represented by a list of the synonym words. In the example below, Lithuanian, Russian and Italian have two words for the English `blue':
words(english,[black,white,red,yellow,blue,green]).
words(lithuanian,[juoda,balta,raudona,geltona,[melyna,zhydra],zhalia]).
words(russian,[chornyj,belyj,krasnyj,zholtyj,[sinij,goluboj],zeljonyj]).
words(italian,[nero,bianco,rosso,giallo,[blu,azzurro],verde]).
The main focus of this research was exploring words phonetically. Consequently, special encoding was used. It consisted of encoding phonemes by using only one letter; incorporating capital letters for encoding different sounds (See table TABREF21).
Table TABREF22 summarises the language files that are obtained at the moment.
<<</Language files>>>
<<<Sheep>>>
<<<Sheep counting words>>>
Sheep counting numbers were extracted from the “Yan Tan Tethera” page on Wikipedia BIBREF12 and placed in a Prolog database. Furthermore, the data was encoded phonetically using the set of rules provided by Prof. David Gilbert.
In the given source, number sets ranged from 1-3 to 1-20 for different dialects. The initial step was to reduce the data to sets of numbers 1-10, aiming:
to have Prolog syntax without errors (avoided “-”, “ ” as they were common symbols after numbers reached 10);
to avoid the effects of different methods of forming and writing down numbers higher than 10. (Usually they were formed from numbers 1-10 and a base. However, they were written in a different order, making the comparison inefficient.)
In addition, the Wharfedale dialect was removed since only numbers 1-3 were provided; the Weardale dialect was eliminated as it had a counting system with base 5. Consequently, the final version of sheep counting numbers database consisted of 23 observations (dialects) with numbers 1-10.
<<</Sheep counting words>>>
<<<Geographical data>>>
In order to enable the analysis of linguistic and geographical distance relationship, a geographical distance database was created. It was done by firstly creating a personalized Google Map with 23 pins, noting the places of different dialects (they were located approximately in the middle of the area) (Figure: FIGREF28). Subsequently, pairwise distances were calculated between all of them (taking walking distance) and added to the database for further use.
<<</Geographical data>>>
<<<Analysis of average and subset linguistic distance>>>
After applying the functions “mean_SD” (Figure: FIGREF72) and “densityP” (Figure: FIGREF73) to the linguistic distances of every word (numbers 1 to 10) in R, the following observations were made. First of all, the most preserved number across all dialects was “10”, with a distance mean of 0.109 and a standard deviation of 0.129. Numbers “1”, “2”, “3”, “4” had comparatively small distances, which might be the result of being used more frequently. On the other hand, number “6” showed more dissimilarity between dialects than the other numbers: the mean score was 0.567 and the standard deviation 0.234. The product of the mean and standard deviation helped to evaluate both at the same time. Moreover, the density plots showed significant fluctuation and tended to have a few peaks, but in general conformed with the statistics provided by “mean_SD”.
<<</Analysis of average and subset linguistic distance>>>
<<</Sheep>>>
<<<Colours>>>
Colour words were extracted from “Colour words in many languages” BIBREF15 page on Omniglot, collected from people and dictionaries. In addition, data was encoded phonetically using the set of rules provided by Prof. David Gilbert.
The latest version of the database consisted of 42 different languages, each containing 6 colours: black, white, red, yellow, blue, green. For the purposes of analysis the following groups were created:
All languages - “ColoursAll” (42 languages)
Indo-European languages - “ColoursIE” (39 languages)
Germanic languages - “ColoursPGermanic” (10 languages)
Romance languages - “ColoursPRomance” (11 languages)
Germanic and Romance languages - “ColoursPG_R” (21 languages)
<<<Mean and Standard Deviation>>>
When examining the data calculated for “ColoursAll” none of the colours showed a clear tendency to be more preserved than others (Figure: FIGREF83). All colours had large distances and comparatively small standard deviation when compared with other groups. Small standard deviation was most likely the result of most of the distances being large.
The Indo-European language group scores were similar to “ColoursAll”, exhibiting a slightly larger standard deviation (Figure: FIGREF84). A conclusion could be drawn that words for the colour “Red” are more similar in this group: the mean linguistic distance was 0.61 and the SD was 0.178, whereas the average mean was 0.642 and the average SD 0.212. However, no colour stood out distinctly.
The Germanic and Romance language groups revealed more significant results. Germanic languages preserved the colour “Green” considerably well (Figure: FIGREF85): the mean and SD were 0.168 and 0.129, whereas the average mean was 0.333 and the average SD 0.171. In addition, the colour “Blue” had favourable scores as well - the mean was 0.209 and the SD was 0.106. Furthermore, Romance languages demonstrated slightly higher means and standard deviations, on average reaching 0.45 and 0.256 (Figure: FIGREF86). Similarly to Germanic, the most preserved colour word in the Romance languages was “Green”, with a mean of 0.296 and an SD of 0.214. It was followed by the words for “Black” and then for “Blue”, both being quite similar.
<<</Mean and Standard Deviation>>>
<<<Density Plots>>>
Density plots of all languages and Indo-European languages were similar: both having multiple peaks with the most density around scores of 0.75 (big linguistic distance). Moreover, Germanic languages density distribution consisted of two peaks for words “White”, “Blue” and “Green” (Figure: FIGREF88). This could possibly be the result of certain weighting in the Phonetic Substitution Table or indicate possible further grouping of languages. The color “Black” had more normal distribution and smoother bell shape compared to others. Furthermore, Romance languages also obtained density plots with two peaks for words “White”, “Yellow”, “Blue” (Figure: FIGREF89). In contrast, “Black”, “Red” and “Green” distributions were quite smooth.
In order to examine how the Phonetic Substitution Table affects the linguistic distances, the “densityP” function was applied to the linguistic distances calculated with the “GabyTable” substitution table. The aim was to eliminate the two peaks for the word “Green” in the Germanic language group. In Germanic languages the word for green tended to begin with either “gr” or “khr” (encoded as “Kr”) - both sounding similar phonetically. However, in the original substitution table, a weight for changing “K” (kh) to “g” (and the other way around) did not exist. Consequently, a new table was implemented with this substitution. This change resulted in notably smaller linguistic distances - the mean for the word “Green” was 0.099. However, it did not eliminate the two peaks: the density of “Green” again had two main peaks, but distributed differently compared to the previous case.
<<</Density Plots>>>
<<<Bhattacharya Coefficients>>>
Bhattacharya coefficients were calculated within each group for different pairs of colours. This helped to evaluate which colours were closer in distribution. In addition, hierarchical clustering was done with Bhattacharya coefficients (find the dendrograms in the Appendix SECREF123). However, the potential meaning behind the results was not fully examined.
Another potential use of Bhattacharya coefficients is their application to the same word across different language groups. As a result, the preservation of particular words can be analysed across language groups, enabling comparison and evaluation of the potential reasons behind it.
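A hedged numpy sketch of how such a Bhattacharya coefficient between two sets of normalised linguistic distances could be computed; the paper does not state its binning choices, so the histogram settings below are assumptions.

import numpy as np

def bhattacharyya(sample_a, sample_b, bins=10, value_range=(0.0, 1.0)):
    # Bhattacharyya coefficient between two empirical distributions: sum over bins of sqrt(p_i * q_i).
    p, _ = np.histogram(sample_a, bins=bins, range=value_range)
    q, _ = np.histogram(sample_b, bins=bins, range=value_range)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))      # 1.0 for identical distributions, 0.0 for disjoint ones

# e.g. pairwise distances for "green" versus "blue" within one language group
green = np.array([0.10, 0.12, 0.20, 0.05, 0.15])
blue  = np.array([0.20, 0.25, 0.18, 0.30, 0.22])
print(bhattacharyya(green, blue))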
<<</Bhattacharya Coefficients>>>
<<</Colours>>>
<<<IPA>>>
“Automatic Phonemic Transcriber” BIBREF16 was used to create 3 IPA encoded databases:
“BasicWords” - words in their original form were taken from Prof. David Gilbert's database for basic words (including: sun, moon, rain, water, fire, man, woman, mother, father, child, yes, no, blood).
“Numbers” - numbers from 1-10 in their original form were taken from Prof. David Gilbert's small database of numbers.
“Colours” - words were taken from the above mentioned database (including words: black, white, red, yellow, blue, green).
Each of the above mentioned databases consisted of 3 languages: English, Danish and German (these were the languages the Automatic Phonemic Transcriber provided) all encoded in IPA.
As the research progressed, the difficulty of obtaining IPA encoding for different languages was faced. This study could not find a cross-linguistic IPA dictionary that included more than 3 languages. Consequently, the question of its existence was raised.
<<</IPA>>>
<<</Data>>>
<<<Methodology>>>
There are two main processes to be carried out.
The first process (Figure: FIGREF43) aims to analyse a database of words: explore which words exhibit more or less variation and which words are better preserved, and examine how languages could be grouped based on the linguistic distances of words.
It begins with the calculation of pairwise linguistic distances for the given database of words. A Phonetic Substitution Table is used to assign weights during the calculation and can be modified if required. The result is a new distance table, which is analysed in the following ways:
Performing the “densityP” function. The outcome is density plots for every word of the database.
Performing hierarchical clustering. Afterwards, the “best cut” is determined, which is either the cut with the best Silhouette value after calculating all possible cases, or a forced number K equal to the number of words per language in the language file.
Calculating Bhattacharya coefficients.
Performing “mean_SD” function.
The second process (Figure: FIGREF44) aims to investigate the relationship between two sets of distance data. In this research, it was applied to analyse the relationship between linguistic and geographical distances.
It starts by producing two pairwise distance tables: one of calculated geographical distances and one of calculated linguistic distances. The data from both tables is then combined into a data frame for regression analysis in R. The outcome is an object of class “lm” (the result of applying the R function “lm”), which is used for data analysis, and a scatter plot with a regression line for visual analysis.
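The same step can be sketched outside R as follows, with scipy's linregress standing in for R's lm; the arrays are illustrative only and assume both tables list the same dialect pairs in the same order.

import numpy as np
from scipy import stats

geo_km = np.array([12.0, 35.5, 80.2, 150.3, 60.7])   # pairwise geographical distances
ling   = np.array([0.10, 0.22, 0.41, 0.55, 0.38])    # pairwise linguistic (edit) distances

fit = stats.linregress(geo_km, ling)                 # analogue of R's lm(ling ~ geo)
print(fit.slope, fit.intercept, fit.rvalue ** 2)     # slope, intercept and R squared
# A scatter plot of the points with this regression line mirrors the visual analysis.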
Both processes have been automated, see Section SECREF66.
<<</Methodology>>>
<<<Methods>>>
<<<Edit Distance>>>
For the purposes of this research, edit distance (a measure used in computer science and computational linguistics for determining the similarity between two strings) was calculated based on the Levenshtein distance metric. This metric between two strings is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one string into the other.
The Levenshtein distance between two strings a,b (of length $\mid a\mid $ and $\mid b\mid $ respectively) is given by $lev_{a,b}(\mid a \mid , \mid b \mid )$ where
$lev_{a,b}(i,j) = \max (i,j)$ if $\min (i,j) = 0$, and otherwise $lev_{a,b}(i,j) = \min \big (\, lev_{a,b}(i-1,j)+1,\; lev_{a,b}(i,j-1)+1,\; lev_{a,b}(i-1,j-1)+1_{(a_{i}\ne b_{j})} \,\big )$
where $1_{(a_{i}\ne b_{j})}$ is the indicator function equal to 0 when $a_{i}=b_{j}$ and equal to 1 otherwise. A normalised edit distance between two strings can be computed by
Edit distance was implemented by Prof. David Gilbert using dynamic programming in SWI Prolog BIBREF17. The program was used to compare two words with the same meaning from different languages. When pairwise comparing two words where either one or both comprise synonyms, all the alternatives for the word in one language are compared with the corresponding (set of) words in the other language, and the closest match is selected. In addition, all-to-all comparisons were made, i.e. the edit distance was also calculated for words having different meanings. Finally, the edit distance for two languages, represented by two equal-length lists of corresponding words, was computed by taking the average of the edit distances of the corresponding word pairs.
An example of pairwise alignments is for the pair of words overa-hofa, where 3 alignments are produced with the use of gap penalty $=1$ and substitution penalties $f \leftrightarrow v = 0.2$, $e \leftrightarrow o = 0.2$ and all other mismatches 1:
[[-,h],[o,o],[v,f],[e,-],[r,-],[a,a]]
[[o,-],[v,h],[e,o],[r,f],[a,a]]
[[o,h],[v,-],[e,o],[r,f],[a,a]]
each with a raw edit distance of 3.2 (and hence the same normalised edit distance).
For the sake of clarity we can write the first alignment, for example, as
-overa
hof--a
where only 3 letters are directly aligned.
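To make the weighted edit distance computation concrete, the following is a minimal R sketch of the dynamic-programming recurrence with a gap penalty and a substitution-cost function. It is an illustrative re-implementation rather than the SWI Prolog program used in the study; the cost values simply mirror the overa-hofa example above.
# Minimal weighted edit distance (illustrative sketch, not the SWI Prolog program).
weighted_edit_distance <- function(a, b, sub_cost, gap = 1) {
  x <- strsplit(a, "")[[1]]
  y <- strsplit(b, "")[[1]]
  n <- length(x); m <- length(y)
  d <- matrix(0, n + 1, m + 1)
  d[, 1] <- (0:n) * gap               # deletions
  d[1, ] <- (0:m) * gap               # insertions
  for (i in 1:n) {
    for (j in 1:m) {
      d[i + 1, j + 1] <- min(d[i, j + 1] + gap,                # delete x[i]
                             d[i + 1, j] + gap,                # insert y[j]
                             d[i, j] + sub_cost(x[i], y[j]))   # (mis)match
    }
  }
  d[n + 1, m + 1]
}
# Substitution costs taken from the example above: f<->v = 0.2, e<->o = 0.2,
# identical letters 0, all other mismatches 1.
example_cost <- function(p, q) {
  if (p == q) return(0)
  if (paste(sort(c(p, q)), collapse = "") %in% c("fv", "eo")) return(0.2)
  1
}
weighted_edit_distance("overa", "hofa", example_cost)   # 3.2, as in the text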
<<</Edit Distance>>>
<<<Phonetic Substitution Table>>>
In order to give a specified weight to the different operations (insertion, deletion and substitution), a Phonetic Substitution Table was created by incorporating Grimm's law BIBREF18 and extending it in-house.
Grimm's Law, a principle of relationships in Indo-European languages, describes a process of regular shifting of consonants in groups. It consists of 3 phases in terms of a chain shift BIBREF19.
Proto-Indo-European voiceless stops change into voiceless fricatives.
Proto-Indo-European voiced stops become voiceless stops.
Proto-Indo-European voiced aspirated stops become voiced stops or fricatives.
This is an abstract representation of the chain shift:
$bh > b > p > f$
$dh > d > t > \theta$
$gh > g > k > x$
$gwh > gw > kw > xw$
Figure FIGREF54 illustrates how further consonant shifting following Grimm's law affected words from different languages BIBREF20.
The phonetic substitution table was extended in-house by adding more shifts. In addition, it was written in such a way as to work with the special encoding described in Section SECREF20. The full table, “editable”, can be found in Appendix SECREF11.
Another phonetic substitution table, called “editableGaby”, was made (see Appendix SECREF11). It was extended by adding pairs such as “dzh” and “zh”; “dzh” and “ch”; “kh” and “g”; as well as “H” (the sound of e.g. the Spanish/Portuguese “j”) with “kh”, “g”, “k”, “h”. In addition, some of the weights were changed for certain pairs for experimental purposes.
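As an illustration of how such a table can be represented programmatically, the following R sketch encodes a hypothetical fragment of substitution weights loosely inspired by the Grimm's-law shifts above; the letter codes and weights are placeholders and do not reproduce the actual “editable” or “editableGaby” tables.
# Hypothetical fragment of a phonetic substitution table (placeholder weights).
sub_table <- data.frame(
  from   = c("b", "d", "g", "p", "f", "e", "K"),
  to     = c("p", "t", "k", "f", "v", "o", "g"),
  weight = c(0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2)
)
# Symmetric cost lookup: identical symbols cost 0, listed pairs use their
# weight, and any unlisted mismatch costs 1.
lookup_cost <- function(p, q, table = sub_table, mismatch = 1) {
  if (p == q) return(0)
  hit <- (table$from == p & table$to == q) | (table$from == q & table$to == p)
  if (any(hit)) table$weight[which(hit)[1]] else mismatch
}
lookup_cost("f", "v")   # 0.2
lookup_cost("m", "s")   # 1
A function of this kind can be passed directly as the substitution-cost argument of the edit distance sketch in the previous section.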
<<</Phonetic Substitution Table>>>
<<<Hierarchical Clustering>>>
<<<Using the OC program>>>
The OC program BIBREF21 is a general-purpose hierarchical cluster analysis program. It outputs a list of the clusters and optionally draws a dendrogram in PostScript. It requires a complete upper-diagonal distance or similarity matrix as input.
<<</Using the OC program>>>
<<<Using R>>>
Hierarchical clustering in R was performed by combining the clustering itself with Silhouette value calculation and the subsequent cut.
In order to carry out agglomerative hierarchical clustering more efficiently, we created a set of functions in R (a minimal sketch is given below):
“sMatrix” - Makes a symmetric matrix from a specified column. The function takes a specifically formatted data frame as an input and returns a new data frame. Having a symmetric matrix is necessary for “silhouetteV” and “hcutVisual” functions.
“silhouetteV” - Calculates Silhouette values with “k” value varying from 2 to n-1 (n being the number of different languages/number of rows/number of columns in a data frame). The function takes a symmetric distance matrix as an input and returns a new data frame containing all Silhouette values.
“hcutVisual” - Performs hierarchical clustering and makes a cut with the given K value. Makes Silhouette plot, Cluster plot and dendrogram. Returns a “hcut” object from which cluster assignment, silhouette information, etc. can be extracted.
It is important to note that K-Means clustering was not performed as the algorithm is meant to operate over a data matrix, not a distance matrix.
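The following is a minimal sketch of this clustering workflow using the standard stats and cluster packages; the in-house “sMatrix”, “silhouetteV” and “hcutVisual” functions wrap similar steps, but their exact arguments, linkage method and plots may differ.
library(cluster)   # silhouette()
# dmat: symmetric matrix of pairwise linguistic distances with language names
# as row and column names (assumed to have been built already).
cluster_languages <- function(dmat, linkage = "average") {
  d  <- as.dist(dmat)
  hc <- hclust(d, method = linkage)
  n  <- nrow(dmat)
  # Average silhouette width for every possible cut k = 2 .. n-1.
  ks  <- 2:(n - 1)
  sil <- sapply(ks, function(k) mean(silhouette(cutree(hc, k), d)[, "sil_width"]))
  best_k <- ks[which.max(sil)]
  plot(hc, main = sprintf("Dendrogram (best cut: k = %d)", best_k))
  list(tree = hc, best_k = best_k,
       clusters = cutree(hc, best_k),
       silhouette = data.frame(k = ks, avg_width = sil))
}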
<<</Using R>>>
<<</Hierarchical Clustering>>>
<<<Further analysis with R>>>
Another set of functions was created to analyse the collected data further. They aim to ease the comparison of the mean, standard deviation and Bhattacharya coefficient within words or language groups (a minimal sketch follows the list). They include:
“mean_SD” - Calculates the mean, the standard deviation, and the product of the mean and the SD for every column of the input. Visualises all three values for each column and places them in one plot, which is returned.
“densityP” - Makes a density plot for every column of the input and puts it in one plot, which is returned.
“tscore” - Calculates t-score for every value in the given data frame. (T-score is a standard score Z shifted and scaled to have a mean of 50 and a standard deviation of 10)
“bhatt” - Calculates the Bhattacharya coefficient (a measure of the amount of overlap between two distributions) for every pair of columns in the data frame. The function returns a new data frame.
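A minimal sketch of the kind of summaries these functions compute is given below; the in-house versions also handle the plotting, and the histogram binning used here to estimate the Bhattacharya coefficient is only one possible approximation.
# df: data frame with one column of pairwise distances per word (assumed).
mean_sd_summary <- function(df) {
  m <- sapply(df, mean); s <- sapply(df, sd)
  data.frame(word = names(df), mean = m, sd = s, mean_x_sd = m * s)
}
# T-score: standard score shifted and scaled to mean 50 and SD 10.
tscore <- function(x) 50 + 10 * (x - mean(x)) / sd(x)
# Bhattacharya coefficient of two score vectors, estimated by binning both
# into a common histogram: 1 = identical distributions, 0 = no overlap.
bhatt_coef <- function(x, y, bins = 20) {
  breaks <- seq(min(c(x, y)), max(c(x, y)), length.out = bins + 1)
  p <- hist(x, breaks = breaks, plot = FALSE)$counts / length(x)
  q <- hist(y, breaks = breaks, plot = FALSE)$counts / length(y)
  sum(sqrt(p * q))
}
# All pairwise coefficients between word columns, e.g.:
# combn(names(df), 2, function(w) bhatt_coef(df[[w[1]]], df[[w[2]]]))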
<<</Further analysis with R>>>
<<<Process automation>>>
In order to optimise the analysis and perform it in the most time-efficient manner, the processes of comparing languages were automated. This was done by creating two shell scripts, each with an accompanying R script.
The first shell script named “oc2r_hist.sh” was made to perform hierarchical clustering with the best silhouette value cut. This script takes a language database as an input and performs pairwise distance calculation. It then calls “hClustering.R” R script, which reads in the produced OC file, performs hierarchical clustering and calculates all possible silhouette values. Finally, it makes a cut with the number of clusters, which provides the highest silhouette value. To enable this process the R script was written by incorporating the functions described in section SECREF57. The outcome of this program is a table of clusters, a dendrogram, clusters' and silhouette plots.
The second shell script called “wordset_make_analyse.sh” was made to perform calculations of mean, standard deviation, Bhattacharya scores and produce density plots. This script takes a language database as an input and performs pairwise distance calculations for each word of the database. It then calls “rAnalysis.R” R script, which reads in the produced OC file and performs further calculations. Firstly, it calculates mean, standard deviation and the product of both of each word and outputs a histogram and a table of scores. Secondly, it produces density plots of each word. Finally, it converts scores into T-Scores and calculates Bhattacharya coefficient for every possible pair of words. It then outputs a table of scores. To enable this process the R script was written by incorporating the functions described in section SECREF61.
Finally, both of the scripts were combined to minimise user participation.
<<</Process automation>>>
<<</Methods>>>
<<<Results>>>
<<<Hierarchical clustering>>>
Hierarchical clustering was performed with the best Silhouette value cut (Figure FIGREF76). The Silhouette value suggested making 9 clusters. In this grouping, the most interesting observation was that Welsh, Breton and Cornish languages were placed together. It conforms with the fact that all 3 languages descended directly from the Common Brittonic language spoken throughout Britain before the English language became dominant.
<<<All to all comparison analysis>>>
To enable analysis of clusters of all to all comparison, hierarchical clustering was performed. This was done by two different approaches: calculating a silhouette value and choosing the number of clusters accordingly; forcing a function to make 10 clusters due to having numbers from 1 to 10 in the sheep counting database.
Using the “silhouetteV” function, silhouette values were calculated for all possible $k$ values. The returned data frame indicated the best number of clusters to be 70 (see Appendix SECREF120 for the dendrogram and cluster plot). The suggested clusters did not separate the numbers 1-10 with very high clarity, but they were comparatively good. A pattern was noticed whereby numbers with lower mean and standard deviation scores resulted in purer clusters. Clusters of numbers “1”, “2”, “3”, “4”, “5” and “10” were not as mixed as those of “6”, “7”, “8”, “9”.
Another way of looking at the all to all comparison data was by producing 10 clusters. This was done by using the “hcutVisual” and “cPurity” functions (see Appendix SECREF120 cluster plot). The results showed high impurity of the clusters (Figure: FIGREF78). Two out of ten clusters were pure, both containing number “5”. Another relatively pure cluster was composed of number “10” and two entries of number “2”. The rest consisted of up to 7 different numbers. This shows that sheep counting numbers in different dialects are too different to form 10 clusters containing each number. However, if the dialects were first grouped and clustering was performed on the smaller groups, these would be likely to have reasonably pure clusters. Exploring these grouping options could be a subject for further work.
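The “cPurity” function is not documented elsewhere in this report; the following is a hypothetical sketch of how per-cluster purity could be computed, assuming a vector of cluster assignments (e.g. from “hcutVisual”) and the number label of every clustered item.
# clusters: cluster assignment of every item; labels: the number (1-10) each
# item represents. Purity = share of the most frequent label in each cluster.
cluster_purity <- function(clusters, labels) {
  tab <- table(clusters, labels)
  data.frame(cluster = rownames(tab),
             size    = rowSums(tab),
             purity  = apply(tab, 1, max) / rowSums(tab))
}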
<<</All to all comparison analysis>>>
<<<Linguistic and Geographical distance relationship>>>
In order to investigate the correlation between linguistic and geographical distance, the “lm” function was applied and a scatter plot was created. The regression line in the scatter plot suggested that a relationship existed. However, the R-squared value, extracted from the “lm” object, was equal to 0.131. This indicated that a relationship existed, but that it was weak.
One assumption made was that the Cornish, Breton and Welsh dialects might have had a weakening effect on the relationship, since they had large linguistic distances compared to other dialects. However, this assumption could not be validated, as the correlation became weaker after eliminating them. This highlights that although these dialects had large linguistic distance scores, they also had large geographical distances, which do not contradict the relationship.
In addition, comparison was done between linguistic distance and $Log_{10}(\text{GeographicalDistance})$. This resulted in an even weaker relationship with R-squared being 0.097.
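A minimal sketch of this regression step is shown below, assuming a data frame with one row per dialect pair and columns holding the two kinds of distance; the R-squared values in the comments are the ones reported above.
# dists: data frame with columns 'linguistic' and 'geographical' (assumed).
fit <- lm(linguistic ~ geographical, data = dists)
summary(fit)$r.squared                     # ~0.131 for the sheep-counting data
plot(dists$geographical, dists$linguistic,
     xlab = "Geographical distance", ylab = "Linguistic distance")
abline(fit)                                # regression line on the scatter plot
fit_log <- lm(linguistic ~ log10(geographical), data = dists)
summary(fit_log)$r.squared                 # ~0.097, the even weaker log-distance fit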
<<</Linguistic and Geographical distance relationship>>>
<<</Hierarchical clustering>>>
<<<Small Numbers>>>
<<<All to all comparison>>>
Analysis was carried out in two ways. First of all, hierarchical clustering was performed with the best silhouette value cut. For this data set the best silhouette value was 0.48, and it suggested making 329 clusters. The clusters did not exhibit high purity; however, the ones that did quite clearly corresponded to unique subgroups of language families.
Another way of looking at the all to all comparison data was by producing 10 clusters. The anticipated outcome was members being distinguished by numbers, forming 10 clean clusters. However, all the clusters were very impure, consisting of multiple different numbers. This might be due to different languages having phonetically similar words with different meanings.
All to all pairwise comparison could be an advantageous tool when used for language family branches or smaller, but related subsets. It could validate if languages belong to a certain group.
<<</All to all comparison>>>
<<</Small Numbers>>>
<<</Results>>>
<<<Conclusions>>>
This project has aimed to develop computational methods to analyse and understand connections between human languages.
The project included collecting words from different languages in order to form new databases, forming rules for phonetic encoding of words and adjusting phonetic substitution table. Several computational methods of calculating pairwise distance between two words were taken, including average, subset and all to all words distance calculation. It was done by incorporating edit distance and phonetic substitution table, and implementing it in SWI Prolog. This was followed by detailed analysis of distance scores, which was conducted by the specific automated routines and developed R functions. They enabled performing hierarchical clustering with a cut either according to silhouette value, or to specified K value. They provided summary of mean, standard deviation and other statistics, like Bhattacharya scores. All these techniques delivered a thorough analysis of data and the automation of processes ensured they were used efficiently.
The analysis of old sheep counting systems in different English dialects resulted in the observation that numbers “1”, “2”, “3”, “4” and “10” were more uniform across different dialects than the others, suggesting that they might have been the most frequently used ones. Analysis of the all to all comparison did not provide pure clusters and shows that sheep counting numbers in different dialects are too different to form 10 clusters containing each number. This suggests that dialects should be grouped into subsets. Furthermore, hierarchical clustering with the best silhouette cut suggested 9 potential groups, each consisting of members with the most similar counting words. Surprisingly, this grouping was not entirely based on location. This corresponded with the difficulty of finding a relationship between geographic and linguistic distance; the conducted tests showed it to be insignificant.
Analysis of colour words revealed that within the Indo-European languages the words for the colour red were moderately better preserved. Both the Germanic and Romance language groups tended to have considerably more uniform words for the green and blue colours. In addition, the Romance language group preserved the colour black reasonably well. Analysis of the distribution of linguistic distances showed multiple peaks within words for various language groups, suggesting that further language grouping could be done. Furthermore, hierarchical clustering with the silhouette cut largely recovered known and officially accepted language families: most of the clusters were subgroups of existing language families, while some suggested a different sub-grouping according to colour words (e.g. Lithuanian was assigned to the Slavic languages, while Latvian formed a cluster on its own).
IPA databases resulted in the same relationships between languages as non-IPA phonetically encoded databases. However, to fully explore the potential of IPA-encoded databases they ought to be expanded and a customized weights table should be created.
In conclusion, this project resulted in creation of several felicitous computational techniques to explore many languages and their correlation all at once.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Methodology, Introduction"
],
"type": "disordered_section"
}
|
2001.11899
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
An efficient automated data analytics approach to large scale computational comparative linguistics
<<<Abstract>>>
This research project aimed to overcome the challenge of analysing human language relationships, facilitate the grouping of languages and the formation of genealogical relationships between them by developing automated comparison techniques. The techniques were based on the phonetic representation of certain key words and concepts. Example word sets included numbers 1-10 (curated), a large database of numbers 1-10 and sheep counting numbers 1-10 (other sources), colours (curated), and basic words (curated). To enable comparison within the sets, the measure of edit distance was calculated based on the Levenshtein distance metric. This metric is the minimum number of single-character edits (insertions, deletions or substitutions) required to transform one string into the other. To explore which words exhibit more or less variation, which words are better preserved, and examine how languages could be grouped based on linguistic distances within sets, several data analytics techniques were involved. Those included density evaluation, hierarchical clustering, silhouette, mean, standard deviation and Bhattacharya coefficient calculations. These techniques led to the development of a workflow which was later implemented by combining Unix shell scripts, a developed R package and SWI Prolog. This proved to be computationally efficient and permitted the fast exploration of large language sets and their analysis.
<<</Abstract>>>
<<<Introduction>>>
The need to uncover presumed underlying linguistic evolutionary principles and to analyse the correlation between the world's languages has motivated this research. For centuries people have been speculating about the origins of language; however, this subject is still obscure. Non-automated linguistic analysis of language relationships has been complicated and very time-consuming. Consequently, this research aims to apply a computational approach to compare human languages. It is based on the phonetic representation of certain key words and concepts. This comparison of word similarity aims to facilitate the grouping of languages and the analysis of the formation of genealogical relationships between languages.
This report contains a thorough description of the proposed methods, the developed techniques and a discussion of the results. During this project several collections of words were gathered and examined, including colour words and numbers. The methods included edit distance, a phonetic substitution table, hierarchical clustering with a cut and other analysis methods. They all aimed to provide insight through both technical data summaries and their visual representation.
<<</Introduction>>>
<<<Background>>>
<<<Human languages>>>
For centuries, people have speculated over the origins of language and its early development. It is believed that language first appeared among Homo Sapiens somewhere between 50,000 and 150,000 years ago BIBREF0. However, the origins of human language are very obscure.
To begin with, it is still unknown if the human language originated from one original and universal Proto-Language. Alfredo Trombetti made the first scientific attempt to establish the reality of monogenesis in languages. His investigation concluded that it was spoken between 100,000 and 200,000 years ago, or close to the first emergence of Homo Sapiens BIBREF1. However it was never accepted comprehensively. The concept of Proto-Language is purely hypothetical and not amenable to analysis in historical linguistics.
Furthermore, there are multiple theories of how language evolved. These could be separated into two distinctly different groups.
Firstly, some researchers claim that language evolved as a result of other evolutionary processes, essentially making it a by-product of evolution, selection for other abilities or as a consequence of yet unknown laws of growth and form. This theory is clearly established in Noam Chomsky BIBREF2 and Stephen Jay Gould's work BIBREF3. Both scientists hypothesize that language evolved together with the human brain, or with the evolution of cognitive structures. They were used for tool making, information processing, learning and were also beneficial for complex communication. This conforms with the theory that as our brains became larger, our cognitive functions increased.
Secondly, another widely held theory is that language came about as an evolutionary adaptation, which is when a population undergoes a change in process over time to survive better. Scientists Steven Pinker and Paul Bloom in “Natural Language and Natural Selection” BIBREF4 theorize that a series of calls or gestures evolved over time into combinations, resulting in complex communication.
Today there are 7,111 distinct languages spoken worldwide according to the 2019 Ethnologue language database. Many circumstances such as the spread of old civilizations, geographical features, and history determine the number of languages spoken in a particular region. Nearly two thirds of languages are from Asia and Africa.
The Asian continent has the largest number of spoken languages - 2,303. Africa follows closely with 2,140 languages spoken across continent. However, given the population of certain areas and colonial expansion in recent centuries, 86 percent of people use languages from Europe and Asia. It is estimated that there is around 4.2 billion speakers of Asian languages and around 1.75 billion speakers of European languages.
Moreover, Pacific languages have approximately 1,000 speakers each on average, but altogether, they represent more than a third of our world’s languages. Papua New Guinea is the most linguistically diverse country in the world. This is possibly due to the effect of its geography imposing isolation on communities. It has over 840 languages spoken, with twelve of them lacking many speakers. It is followed by Indonesia, which has 709 languages spoken across the country.
<<<Indo-European languages and Kurgan Hypothesis>>>
Indo-European is a language family that includes most of the modern languages of Europe, as well as certain languages of Asia. The Indo-European language family consists of several hundred related languages and dialects. Consequently, linguists have long been interested in exploring the origins of the Indo-European language family.
In the mid-1950s, Marija Gimbutas, a Lithuanian-American archaeologist and anthropologist, combined her substantial background in linguistic paleontology with archaeological evidence to formulate the Kurgan hypothesis BIBREF5. This hypothesis is the most widely accepted proposal to identify the homeland of Proto-Indo-European (PIE) (the ancient common ancestor of the Indo-European languages) speakers and to explain the rapid and extensive spread of Indo-European languages throughout Europe and Asia BIBREF6 BIBREF7. The Kurgan hypothesis proposes that the most likely speakers of the Proto-Indo-European language were people of a Kurgan culture in the Pontic steppe, on the north side of the Black Sea. It also divides the Kurgan culture into four successive stages (I, II, III, IV) and identifies three waves of expansion (I, II, III). In addition, the model suggests that the Indo-European migration took place from 4000 to 1000 BC. See Figure FIGREF4 for a visual illustration of the Indo-European migration.
Today there are approximately 445 living Indo-European languages, which are spoken by 3.2 billion people, according to Ethnologue. They are divided into the following groups: Albanian, Armenian, Baltic, Slavic, Celtic, Germanic, Hellenic, Indo-Iranian and Italic (Romance) FIGREF3 BIBREF8.
<<</Indo-European languages and Kurgan Hypothesis>>>
<<<Brittonic languages>>>
Brittonic or British Celtic languages derive from the Common Brittonic language, spoken throughout Great Britain south of the Firth of Forth during the Iron Age and Roman period. They are classified as Indo-European Celtic languages BIBREF10. The family tree of Brittonic languages is showed in Table TABREF6. Common Brittonic is ancestral to Western and Southwestern Brittonic. Consequently, Cumbric and Welsh, which is spoken in Wales, derived from Western Brittonic. Cornish and Breton, spoken in Cornwall and Brittany, respectively, originated from Southwestern side.
Today Welsh, Cornish and Breton are still in use. However, it is worth pointing out that Cornish is a language revived by second-language learners, the last native speakers having died in the late 18th century. Some people claimed that the Cornish language was an important part of their identity, culture and heritage, and a revival began in the early 20th century. Cornish is currently a recognised minority language under the European Charter for Regional or Minority Languages.
<<</Brittonic languages>>>
<<<Sheep Counting System>>>
Brittonic Celtic language is an ancestor to the number names used for sheep counting BIBREF11 BIBREF12. Until the Industrial Revolution, the use of traditional number systems was common among shepherds, especially in the fells of the Lake District. The sheep-counting system was referred to as Yan Tan Tethera. It was spread across Northern England and in other parts of Britain in earlier times. The number names varied according to dialect, geography, and other factors. They also preserved interesting indications of how languages evolved over time.
The word “yan” or “yen”, meaning “one” in some northern English dialects, represents a regular development in Northern English BIBREF13. During this development the Old English long vowel /ɑː/ <ā> was broken into /ie/, /ia/ and so on. This explains the shift to “yan” and “ane” from the Old English ān, which is itself derived from the Proto-Germanic “*ainaz” BIBREF14.
In addition, the counting system demonstrates a clear connection with counting on the fingers, particularly after numbers reach 10, as the best known examples are formed according to this structure: 1 and 10, 2 and 10, up to 15, and then 1 and 15, 2 and 15, up to 20. The counting would end at 20. This might be due to the fact that the shepherds, on reaching 20, would transfer a pebble or marble from one pocket to another, so as to keep a tally of the number of scores.
<<</Sheep Counting System>>>
<<</Human languages>>>
<<</Background>>>
<<<Aims and Objectives>>>
<<<Overall Aim>>>
The aim of this research was to develop computational methods to compare human languages based on the phonetic form of single words (i.e. not exploiting grammar). This comparison of word similarity aims to facilitate the grouping of languages, the identification of the presumed underlying linguistic evolutionary principles and the analysis of the formation of genealogical relationships between languages.
<<</Overall Aim>>>
<<<Specific Objectives>>>
Devise a way to encode the phonetic representation of words, using:
an in-house encoding,
an IPA (International Phonetic Alphabet).
Develop methods to analyze the comparative relationships between languages using: descriptive and inferential statistics, clustering, visualisation of the data, and analysis of the results.
Implement a repeatable process for running the analysis methods with new data.
Analyse the correlation between geographical distance and language similarity (linguistic distance), and investigate if it explains the evolutionary distance.
Examine which words exhibit more or less variation and the likely causes of it.
Explore which words are preserved better across the same language group and possible reasons behind it.
Explore which language group preserves particular words more in comparison to others and potential reasons behind it.
Determine if certain language groups are correct and exploit the possibility of forming new ones.
<<</Specific Objectives>>>
<<</Aims and Objectives>>>
<<<Data>>>
<<<Language files>>>
A language file, or database, is a set of languages, each of which is associated with an ordered list of words. All lists of words for a particular data set have the same length. For example:
numbers(romani,[iek,dui,trin,shtar,panj,shov,efta,oksto,ena,desh]).
numbers(english,[wun,too,three,foor,five,siks,seven,eit,nine,ten]).
numbers(french,[un,de,troi,katre,sink,sis,set,wuit,neuf,dis]).
Words and languages are encoded in this format for later use in Prolog. In Prolog each “numbers” line is a fact with 2 arguments; the first is the language name and the second is a list (indicated between square brackets) of words. Words can be written down in their original form or encoded phonetically (as shown in the example). Where synonyms for a word are known, the word itself is represented by a list of the synonym words. In the example below, Lithuanian, Russian and Italian have two words for the English `blue':
words(english,[black,white,red,yellow,blue,green]).
words(lithuanian,[juoda,balta,raudona,geltona,[melyna,zhydra],zhalia]).
words(russian,[chornyj,belyj,krasnyj,zholtyj,[sinij,goluboj],zeljonyj]).
words(italian,[nero,bianco,rosso,giallo,[blu,azzurro],verde]).
The main focus of this research was exploring words phonetically. Consequently, special encoding was used. It consisted of encoding phonemes by using only one letter; incorporating capital letters for encoding different sounds (See table TABREF21).
Table TABREF22 summarises the language files that are obtained at the moment.
<<</Language files>>>
<<<Sheep>>>
<<<Sheep counting words>>>
Sheep counting numbers were extracted from “Yan Tan Tethera” BIBREF12 page on Wikipedia and placed in a Prolog database. Furthermore, data was encoded phonetically using the set of rules provided by Prof. David Gilbert.
In the given source, number sets ranged from 1-3 to 1-20 for different dialects. The initial step was to reduce the size of the data to sets of numbers 1-10, thereby aiming:
to obtain Prolog syntax without errors (avoiding “-” and “ ”, which were common symbols once numbers exceeded 10);
to avoid the effects of different methods of forming and writing down numbers higher than 10. (Usually they were formed from numbers 1-10 and a base. However, they were written in a different order, making the comparison inefficient.)
In addition, the Wharfedale dialect was removed since only numbers 1-3 were provided; the Weardale dialect was eliminated as it had a counting system with base 5. Consequently, the final version of sheep counting numbers database consisted of 23 observations (dialects) with numbers 1-10.
<<</Sheep counting words>>>
<<<Geographical data>>>
In order to enable the analysis of linguistic and geographical distance relationship, a geographical distance database was created. It was done by firstly creating a personalized Google Map with 23 pins, noting the places of different dialects (they were located approximately in the middle of the area) (Figure: FIGREF28). Subsequently, pairwise distances were calculated between all of them (taking walking distance) and added to the database for further use.
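The study itself used walking distances read off a personalised Google Map; as a sketch of how this step could be automated, the R code below computes great-circle distances with the geosphere package from pin coordinates. The dialect names and coordinates shown are placeholders, not the 23 pins used in the project.
library(geosphere)   # distm(), distHaversine()
# Hypothetical (longitude, latitude) pins for three dialect locations.
pins <- rbind(
  Borrowdale = c(-3.15, 54.52),
  Swaledale  = c(-2.13, 54.38),
  Teesdale   = c(-2.10, 54.62)
)
geo_dist <- distm(pins, fun = distHaversine) / 1000   # pairwise distances in km
rownames(geo_dist) <- colnames(geo_dist) <- rownames(pins)
geo_dist   # symmetric matrix, ready to be paired with the linguistic distances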
<<</Geographical data>>>
<<<Analysis of average and subset linguistic distance>>>
After applying functions “mean_SD” (Figure: FIGREF72) and “densityP” (Figure: FIGREF73) to the linguistic distances of every word (numbers 1 to 10) in R, the following observations were made. First of all, the most preserved number across all dialects was “10”, with a distance mean of 0.109 and standard deviation of 0.129. Numbers “1”, “2”, “3”, “4” had comparatively small distances, which might be the result of being used more frequently. On the other hand, number “6” showed more dissimilarities between dialects than other numbers. Its mean score was 0.567 and standard deviation 0.234. The product of the mean and standard deviation helped to evaluate both at the same time. Moreover, the density plots showed significant fluctuation and tended to have a few peaks, but in general they conformed with the statistics provided by “mean_SD”.
<<</Analysis of average and subset linguistic distance>>>
<<</Sheep>>>
<<<Colours>>>
Colour words were extracted from “Colour words in many languages” BIBREF15 page on Omniglot, collected from people and dictionaries. In addition, data was encoded phonetically using the set of rules provided by Prof. David Gilbert.
The latest version of the database consisted of 42 different languages, each containing 6 colours: black, white, red, yellow, blue, green. For the purposes of analysis the following groups were created:
All languages - “ColoursAll” (42 languages)
Indo-European languages - “ColoursIE” (39 languages)
Germanic languages - “ColoursPGermanic” (10 languages)
Romance languages - “ColoursPRomance” (11 languages)
Germanic and Romance languages - “ColoursPG_R” (21 languages)
<<<Mean and Standard Deviation>>>
When examining the data calculated for “ColoursAll” none of the colours showed a clear tendency to be more preserved than others (Figure: FIGREF83). All colours had large distances and comparatively small standard deviation when compared with other groups. Small standard deviation was most likely the result of most of the distances being large.
Indo-European language group scores were similar to “ColoursAll”, exhibiting slightly larger standard deviation (Figure: FIGREF84). Conclusion could be drawn that words for color “Red” are more similar in this group. The mean score of linguistic distances was 0.61, and SD was equal to 0.178, when average mean was 0.642 and SD 0.212. However, no colour stood out distinctly.
The Germanic and Romance language groups revealed more significant results. Germanic languages preserved the colour “Green” considerably well (Figure: FIGREF85): the mean and SD were 0.168 and 0.129, compared with an average mean of 0.333 and SD of 0.171. In addition, the colour “Blue” had favourable scores as well - the mean was 0.209 and the SD was 0.106. Furthermore, Romance languages demonstrated slightly higher means and standard deviations, on average reaching 0.45 and 0.256 (Figure: FIGREF86). Similarly to Germanic, the most preserved colour word in the Romance languages was “Green”, with a mean of 0.296 and SD of 0.214. It was followed by the words for “Black” and then for “Blue”, both being quite similar.
<<</Mean and Standard Deviation>>>
<<<Density Plots>>>
The density plots of all languages and of the Indo-European languages were similar: both had multiple peaks, with most of the density around scores of 0.75 (a large linguistic distance). Moreover, the Germanic languages' density distribution had two peaks for the words “White”, “Blue” and “Green” (Figure: FIGREF88). This could possibly be the result of certain weightings in the Phonetic Substitution Table, or indicate possible further grouping of languages. The colour “Black” had a more normal distribution and a smoother bell shape compared to the others. Furthermore, Romance languages also produced density plots with two peaks for the words “White”, “Yellow”, “Blue” (Figure: FIGREF89). In contrast, the “Black”, “Red” and “Green” distributions were quite smooth.
In order to examine how the Phonetic Substitution Table affects the linguistic distances, the “densityP” function was applied to the linguistic distances calculated with the “GabyTable” substitution table. The aim was to eliminate the two peaks in the Germanic language group for the word “Green”. In Germanic languages the word for green tended to begin with either “gr” or “khr” (encoded as “Kr”) - both sounding similar phonetically. However, in the original substitution table, a weight for changing “K” (kh) to “g” (and the other way around) did not exist. Consequently, a new table was implemented with this substitution. This change resulted in notably smaller linguistic distances - the mean for the word “Green” was 0.099. However, it did not remove the two peaks: the density of “Green” again had two main peaks, but differently distributed compared to the previous case.
<<</Density Plots>>>
<<<Bhattacharya Coefficients>>>
Bhattacharya coefficients were calculated within each group for different pairs of colours. This helped to evaluate which colours were closer in distribution. In addition, hierarchical clustering was done with Bhattacharya coefficients (find the dendrograms in the Appendix SECREF123). However, the potential meaning behind the results was not fully examined.
Another potential use of Bhattacharya coefficients is their application to the same word from a different language group. As a result, the preservation of particular words can be analysed across language groups, enabling to compare and evaluate potential reasons behind it.
<<</Bhattacharya Coefficients>>>
<<</Colours>>>
<<<IPA>>>
“Automatic Phonemic Transcriber” BIBREF16 was used to create 3 IPA encoded databases:
“BasicWords” - words in their original form were taken from Prof. David Gilbert's database for basic words (including: sun, moon, rain, water, fire, man, woman, mother, father, child, yes, no, blood).
“Numbers” - numbers from 1-10 in their original form were taken from Prof. David Gilbert's small database of numbers.
“Colours” - words were taken from the above mentioned database (including words: black, white, red, yellow, blue, green).
Each of the above mentioned databases consisted of 3 languages: English, Danish and German (these were the languages the Automatic Phonemic Transcriber provided) all encoded in IPA.
As the research progressed, the difficulty of obtaining IPA encoding for different languages was faced. This study could not find a cross-linguistic IPA dictionary that included more than 3 languages. Consequently, the question of its existence was raised.
<<</IPA>>>
<<</Data>>>
<<<Methodology>>>
There are two main processes to be carried out.
The first process (Figure: FIGREF43) aims to analyse a database of words: to explore which words exhibit more or less variation and which words are better preserved, and to examine how languages could be grouped based on the linguistic distances of words.
It begins with the calculation of pairwise linguistic distances for the given database of words. A Phonetic Substitution Table is used to assign weights during the calculation and could possibly be modified. The result is a new distance table which is analysed in the following ways:
Performing “densityP” function. The outcome is density plots for every word of a database.
Performing hierarchical clustering. Afterwards, the “Best cut” is determined, which is either the cut with the best Silhouette value after evaluating all possible cases, or a forced number of clusters K equal to the number of words per language in the language file.
Calculating Bhattacharya coefficients.
Performing “mean_SD” function.
The second process (Figure: FIGREF44) aims to investigate the relationship between two sets of distance data. In this research, it was applied to analyse the relationship between linguistic and geographical distances.
It starts with producing two pairwise distance tables: one containing the calculated geographical distances, the other the calculated linguistic distances. The data from both tables is then combined into a data frame for regression analysis in R. The outcome is an object of class “lm” (the result of the R function “lm”), which is used for data analysis, together with a scatter plot with a regression line for visual analysis.
Both processes have been automated, see Section SECREF66.
<<</Methodology>>>
<<<Methods>>>
<<<Edit Distance>>>
For the purposes of this research, edit distance (a measure used in computer science and computational linguistics for determining the similarity between two strings) was calculated based on the Levenshtein distance metric. This metric is the minimum number of single-character edits (insertions, deletions or substitutions) required to transform one string into the other.
The Levenshtein distance between two strings a,b (of length $\mid a\mid $ and $\mid b\mid $ respectively) is given by $lev_{a,b}(\mid a \mid , \mid b \mid )$ where
$lev_{a,b}(i,j)=\begin{cases}\max(i,j) & \text{if } \min(i,j)=0,\\ \min\lbrace lev_{a,b}(i-1,j)+1,\; lev_{a,b}(i,j-1)+1,\; lev_{a,b}(i-1,j-1)+1_{(a_{i}\ne b_{j})}\rbrace & \text{otherwise,}\end{cases}$
where $1_{(a_{i}\ne b_{j})}$ is the indicator function equal to 0 when $a_{i}=b_{j}$ and equal to 1 otherwise. A normalised edit distance between two strings can then be computed by scaling the raw distance by the lengths of the strings being compared.
Edit distance was implemented by Prof. David Gilbert using dynamic programming in SWI Prolog BIBREF17. The program was used to compare two words with the same meaning from different languages. When pairwise comparing two words where either one or both comprise synonyms, all the alternatives for the word in one language are compared with the corresponding word (or set of words) in the other language, and the closest match is selected. In addition, all to all comparisons were made, i.e. edit distance was calculated for words having different meanings as well. Finally, the edit distance for two languages represented by two lists of equal length of corresponding words was computed by taking the average of the edit distance for each (corresponding) pair of words.
An example of pairwise alignments is for the pair of words overa-hofa, where 3 alignments are produced with the use of gap penalty $=1$ and substitution penalties $f \leftrightarrow v = 0.2$, $e \leftrightarrow o = 0.2$ and all other mismatches 1:
[[-,h],[o,o],[v,f],[e,-],[r,-],[a,a]]
[[o,-],[v,h],[e,o],[r,f],[a,a]]
[[o,h],[v,-],[e,o],[r,f],[a,a]]
each with a raw edit distance of 3.2 (and hence the same normalised edit distance).
For the sake of clarity we can write the first alignment, for example, as
-overa
hof--a
where only 3 letters are directly aligned.
<<</Edit Distance>>>
<<<Phonetic Substitution Table>>>
In order to give a specified weight to the different operations (insertion, deletion and substitution), a Phonetic Substitution Table was created by incorporating Grimm's law BIBREF18 and extending it in-house.
Grimm's Law, a principle of relationships in Indo-European languages, describes a process of regular shifting of consonants in groups. It consists of 3 phases in terms of a chain shift BIBREF19.
Proto-Indo-European voiceless stops change into voiceless fricatives.
Proto-Indo-European voiced stops become voiceless stops.
Proto-Indo-European voiced aspirated stops become voiced stops or fricatives.
This is an abstract representation of the chain shift:
$bh > b > p > f$
$dh > d > t > \theta$
$gh > g > k > x$
$gwh > gw > kw > xw$
Figure FIGREF54 illustrates how further consonant shifting following Grimm's law affected words from different languages BIBREF20.
The phonetic substitution table was extended in-house by adding more shifts. In addition, it was written in such a way as to work with the special encoding described in Section SECREF20. The full table, “editable”, can be found in Appendix SECREF11.
Another phonetic substitution table, called “editableGaby”, was made (see Appendix SECREF11). It was extended by adding pairs such as “dzh” and “zh”; “dzh” and “ch”; “kh” and “g”; as well as “H” (the sound of e.g. the Spanish/Portuguese “j”) with “kh”, “g”, “k”, “h”. In addition, some of the weights were changed for certain pairs for experimental purposes.
<<</Phonetic Substitution Table>>>
<<<Hierarchical Clustering>>>
<<<Using the OC program>>>
The OC program BIBREF21 is a general-purpose hierarchical cluster analysis program. It outputs a list of the clusters and optionally draws a dendrogram in PostScript. It requires a complete upper-diagonal distance or similarity matrix as input.
<<</Using the OC program>>>
<<<Using R>>>
Hierarchical clustering in R was performed by combining the clustering itself with Silhouette value calculation and the subsequent cut.
In order to carry out agglomerative hierarchical clustering more efficiently, we created a set of functions in R:
“sMatrix” - Makes a symmetric matrix from a specified column. The function takes a specifically formatted data frame as an input and returns a new data frame. Having a symmetric matrix is necessary for “silhouetteV” and “hcutVisual” functions.
“silhouetteV” - Calculates Silhouette values with “k” value varying from 2 to n-1 (n being the number of different languages/number of rows/number of columns in a data frame). The function takes a symmetric distance matrix as an input and returns a new data frame containing all Silhouette values.
“hcutVisual” - Performs hierarchical clustering and makes a cut with the given K value. Makes Silhouette plot, Cluster plot and dendrogram. Returns a “hcut” object from which cluster assignment, silhouette information, etc. can be extracted.
It is important to note that K-Means clustering was not performed as the algorithm is meant to operate over a data matrix, not a distance matrix.
<<</Using R>>>
<<</Hierarchical Clustering>>>
<<<Further analysis with R>>>
Another set of functions was created to analyse the collected data further. They aim to ease the comparison of the mean, standard deviation and Bhattacharya coefficient within words or language groups. They include:
“mean_SD” - Calculates the mean, the standard deviation, and the product of the mean and the SD for every column of the input. Visualises all three values for each column and places them in one plot, which is returned.
“densityP” - Makes a density plot for every column of the input and puts it in one plot, which is returned.
“tscore” - Calculates t-score for every value in the given data frame. (T-score is a standard score Z shifted and scaled to have a mean of 50 and a standard deviation of 10)
“bhatt” - Calculates the Bhattacharya coefficient (a measure of the amount of overlap between two distributions) for every pair of columns in the data frame. The function returns a new data frame.
<<</Further analysis with R>>>
<<<Process automation>>>
In order to optimise the analysis and perform it in the most time-efficient manner, the processes of comparing languages were automated. This was done by creating two shell scripts, each with an accompanying R script.
The first shell script named “oc2r_hist.sh” was made to perform hierarchical clustering with the best silhouette value cut. This script takes a language database as an input and performs pairwise distance calculation. It then calls “hClustering.R” R script, which reads in the produced OC file, performs hierarchical clustering and calculates all possible silhouette values. Finally, it makes a cut with the number of clusters, which provides the highest silhouette value. To enable this process the R script was written by incorporating the functions described in section SECREF57. The outcome of this program is a table of clusters, a dendrogram, clusters' and silhouette plots.
The second shell script called “wordset_make_analyse.sh” was made to perform calculations of mean, standard deviation, Bhattacharya scores and produce density plots. This script takes a language database as an input and performs pairwise distance calculations for each word of the database. It then calls “rAnalysis.R” R script, which reads in the produced OC file and performs further calculations. Firstly, it calculates mean, standard deviation and the product of both of each word and outputs a histogram and a table of scores. Secondly, it produces density plots of each word. Finally, it converts scores into T-Scores and calculates Bhattacharya coefficient for every possible pair of words. It then outputs a table of scores. To enable this process the R script was written by incorporating the functions described in section SECREF61.
Finally, both of the scripts were combined to minimise user participation.
<<</Process automation>>>
<<</Methods>>>
<<<Results>>>
<<<Hierarchical clustering>>>
Hierarchical clustering was performed with the best Silhouette value cut (Figure FIGREF76). The Silhouette value suggested making 9 clusters. In this grouping, the most interesting observation was that Welsh, Breton and Cornish languages were placed together. It conforms with the fact that all 3 languages descended directly from the Common Brittonic language spoken throughout Britain before the English language became dominant.
<<<All to all comparison analysis>>>
To enable analysis of clusters of all to all comparison, hierarchical clustering was performed. This was done by two different approaches: calculating a silhouette value and choosing the number of clusters accordingly; forcing a function to make 10 clusters due to having numbers from 1 to 10 in the sheep counting database.
Using the “silhouetteV” function, silhouette values were calculated for all possible $k$ values. The returned data frame indicated the best number of clusters to be 70 (see Appendix SECREF120 for the dendrogram and cluster plot). The suggested clusters did not separate the numbers 1-10 with very high clarity, but they were comparatively good. A pattern was noticed whereby numbers with lower mean and standard deviation scores resulted in purer clusters. Clusters of numbers “1”, “2”, “3”, “4”, “5” and “10” were not as mixed as those of “6”, “7”, “8”, “9”.
Another way of looking at the all to all comparison data was by producing 10 clusters. This was done by using the “hcutVisual” and “cPurity” functions (see Appendix SECREF120 cluster plot). The results showed high impurity of the clusters (Figure: FIGREF78). Two out of ten clusters were pure, both containing number “5”. Another relatively pure cluster was composed of number “10” and two entries of number “2”. The rest consisted of up to 7 different numbers. This shows that sheep counting numbers in different dialects are too different to form 10 clusters containing each number. However, if the dialects were first grouped and clustering was performed on the smaller groups, these would be likely to have reasonably pure clusters. Exploring these grouping options could be a subject for further work.
<<</All to all comparison analysis>>>
<<<Linguistic and Geographical distance relationship>>>
In order to investigate the correlation between linguistic and geographical distance, the “lm” function was applied and a scatter plot was created. The regression line in the scatter plot suggested that a relationship existed. However, the R-squared value, extracted from the “lm” object, was equal to 0.131. This indicated that a relationship existed, but that it was weak.
One assumption made was that the Cornish, Breton and Welsh dialects might have had a weakening effect on the relationship, since they had large linguistic distances compared to other dialects. However, this assumption could not be validated, as the correlation became weaker after eliminating them. This highlights that although these dialects had large linguistic distance scores, they also had large geographical distances, which do not contradict the relationship.
In addition, comparison was done between linguistic distance and $Log_{10}(\text{GeographicalDistance})$. This resulted in an even weaker relationship with R-squared being 0.097.
<<</Linguistic and Geographical distance relationship>>>
<<</Hierarchical clustering>>>
<<<Small Numbers>>>
<<<All to all comparison>>>
Analysis was carried out in two ways. First of all, hierarchical clustering was performed with the best silhouette value cut. For this data set the best silhouette value was 0.48, and it suggested making 329 clusters. The clusters did not exhibit high purity; however, the ones that did quite clearly corresponded to unique subgroups of language families.
Another way of looking at the all to all comparison data was by producing 10 clusters. The anticipated outcome was members being distinguished by numbers, forming 10 clean clusters. However, all the clusters were very impure, consisting of multiple different numbers. This might be due to different languages having phonetically similar words with different meanings.
All to all pairwise comparison could be an advantageous tool when used for language family branches or smaller, but related subsets. It could validate if languages belong to a certain group.
<<</All to all comparison>>>
<<</Small Numbers>>>
<<</Results>>>
<<<Conclusions>>>
This project has aimed to develop computational methods to analyse and understand connections between human languages.
The project included collecting words from different languages in order to form new databases, forming rules for phonetic encoding of words and adjusting phonetic substitution table. Several computational methods of calculating pairwise distance between two words were taken, including average, subset and all to all words distance calculation. It was done by incorporating edit distance and phonetic substitution table, and implementing it in SWI Prolog. This was followed by detailed analysis of distance scores, which was conducted by the specific automated routines and developed R functions. They enabled performing hierarchical clustering with a cut either according to silhouette value, or to specified K value. They provided summary of mean, standard deviation and other statistics, like Bhattacharya scores. All these techniques delivered a thorough analysis of data and the automation of processes ensured they were used efficiently.
The analysis of old sheep counting systems in different English dialects resulted in the observation that numbers “1”, “2”, “3”, “4” and “10” were more uniform across different dialects than the others, suggesting that they might have been the most frequently used ones. Analysis of the all to all comparison did not provide pure clusters and shows that sheep counting numbers in different dialects are too different to form 10 clusters containing each number. This suggests that dialects should be grouped into subsets. Furthermore, hierarchical clustering with the best silhouette cut suggested 9 potential groups, each consisting of members with the most similar counting words. Surprisingly, this grouping was not entirely based on location. This corresponded with the difficulty of finding a relationship between geographic and linguistic distance; the conducted tests showed it to be insignificant.
Analysis of colour words revealed that within the Indo-European languages the words for the colour red were moderately better preserved. Both the Germanic and Romance language groups tended to have considerably more uniform words for the green and blue colours. In addition, the Romance language group preserved the colour black reasonably well. Analysis of the distribution of linguistic distances showed multiple peaks within words for various language groups, suggesting that further language grouping could be done. Furthermore, hierarchical clustering with the silhouette cut largely recovered known and officially accepted language families: most of the clusters were subgroups of existing language families, while some suggested a different sub-grouping according to colour words (e.g. Lithuanian was assigned to the Slavic languages, while Latvian formed a cluster on its own).
IPA databases resulted in the same relationships between languages as non-IPA phonetically encoded databases. However, to fully explore the potential of IPA-encoded databases they ought to be expanded and a customized weights table should be created.
In conclusion, this project resulted in creation of several felicitous computational techniques to explore many languages and their correlation all at once.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Methodology, Abstract"
],
"type": "disordered_section"
}
|
1912.06602
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
That and There: Judging the Intent of Pointing Actions with Robotic Arms
<<<Abstract>>>
Collaborative robotics requires effective communication between a robot and a human partner. This work proposes a set of interpretive principles for how a robotic arm can use pointing actions to communicate task information to people by extending existing models from the related literature. These principles are evaluated through studies where English-speaking human subjects view animations of simulated robots instructing pick-and-place tasks. The evaluation distinguishes two classes of pointing actions that arise in pick-and-place tasks: referential pointing (identifying objects) and locating pointing (identifying locations). The study indicates that human subjects show greater flexibility in interpreting the intent of referential pointing compared to locating pointing, which needs to be more deliberate. The results also demonstrate the effects of variation in the environment and task context on the interpretation of pointing. Our corpus, experiments and design principles advance models of context, common sense reasoning and communication in embodied communication.
<<</Abstract>>>
<<<Introduction>>>
Recent years have seen a rapid increase of robotic deployment, beyond traditional applications in cordoned-off workcells in factories, into new, more collaborative use-cases. For example, social robotics and service robotics have targeted scenarios like rehabilitation, where a robot operates in close proximity to a human. While industrial applications envision full autonomy, these collaborative scenarios involve interaction between robots and humans and require effective communication. For instance, a robot that is not able to reach an object may ask for a pick-and-place to be executed in the context of collaborative assembly. Or, in the context of a robotic assistant, a robot may ask for confirmation of a pick-and-place requested by a person.
When the robot's form permits, researchers can design such interactions using principles informed by research on embodied face-to-face human–human communication. In particular, by realizing pointing gestures, an articulated robotic arm with a directional end-effector can exploit a fundamental ingredient of human communication BIBREF0. This has motivated roboticists to study simple pointing gestures that identify objects BIBREF1, BIBREF2, BIBREF3. This paper develops an empirically-grounded approach to robotic pointing that extends the range of physical settings, task contexts and communicative goals of robotic gestures. This is a step towards the richer and diverse interpretations that human pointing exhibits BIBREF4.
This work has two key contributions. First, we create a systematic dataset, involving over 7000 human judgments, where crowd workers describe their interpretation of animations of simulated robots instructing pick-and-place tasks. Planned comparisons allow us to compare pointing actions that identify objects (referential pointing) with those that identify locations (locating pointing). They also allow us to quantify the effect of accompanying speech, task constraints and scene complexity, as well as variation in the spatial content of the scene. This new resource documents important differences in the way pointing is interpreted in different cases. For example, referential pointing is typically robust to the exactness of the pointing gesture, whereas locating pointing is much more sensitive and requires more deliberate pointing to ensure a correct interpretation. The Experiment Design section explains the overall process of data collection, the power analysis for the preregistered protocol, and the content presented to subjects across conditions.
The second contribution is a set of interpretive principles, inspired by the literature on vague communication, that summarize the findings about robot pointing. They suggest that pointing selects from a set of candidate interpretations determined by the type of information specified, the possibilities presented by the scene, and the options compatible with the current task. In particular, we propose that pointing picks out all candidates that are not significantly further from the pointing ray than the closest alternatives. Based on our empirical results, we present design principles that formalize the relevant notions of “available alternatives” and “significantly further away”, which can be used in future pointing robots. The Analysis and Design Principles sections explain and justify this approach.
<<</Introduction>>>
<<<Related work>>>
This paper focuses on the fundamental AI challenge of effective embodied communication, by proposing empirically determined generative rules for robotic pointing, including not only referential pointing but also pointing that is location-oriented in nature. Prior research has recognized the importance of effective communication by embracing the diverse modalities that AI agents can use to express information. In particular, perceiving physical actions BIBREF5 is often essential for socially-embedded behavior BIBREF6, as well as for understanding human demonstrations and inferring solutions that can be emulated by robots BIBREF7. Animated agents have long provided resources for AI researchers to experiment with models of conversational interaction including gesture BIBREF8, while communication using hand gestures BIBREF9 has played a role in supporting intelligent human-computer interaction.
Enabling robots to understand and generate instructions to collaboratively carry out tasks with humans is an active area of research in natural language processing and human-robot interaction BIBREF10, BIBREF11. Since robotic hardware capabilities have increased, robots are increasingly seen as a viable platform for expressing and studying behavioral models BIBREF12. In the context of human-robot interaction, deictic or pointing gestures have been used as a form of communication BIBREF13. More recent work has developed richer abilities for referring to objects by using pre-recorded, human-guided motions BIBREF14, or using mixed-reality, multi-modal setups BIBREF15.
Particular efforts in robotics have looked at making pointing gestures legible, adapting the process of motion planning so that robot movements are correctly understood as being directed toward the location of a particular object in space BIBREF2, BIBREF3. The current work uses gestures, including pointing gestures and demonstrations, that are legible in this sense. It goes on to explore how precise the targeting has to be to signal an intended interpretation.
In natural language processing research, it's common to use an expanded pointing cone to describe the possible target objects for a pointing gesture, based on findings about human pointing BIBREF16, BIBREF17. Pointing cone models have also been used to model referential pointing in human–robot interaction BIBREF18, BIBREF19. In cluttered scenes, the pointing cone typically includes a region with many candidate referents. Understanding and generating object references in these situations involves combining pointing with natural language descriptions BIBREF1, BIBREF20. While we also find that many pointing gestures are ambiguous and can benefit from linguistic supplementation, our results challenge the assumption of a uniform pointing cone. We argue for an alternative, context-sensitive model.
In addition to gestures that identify objects, we also look at pointing gestures that identify points in space. The closest related work involves navigation tasks, where pointing can be used to discriminate direction (e.g., left vs right) BIBREF21, BIBREF22. The spatial information needed for pick-and-place tasks is substantially more precise. Our findings suggest that this precision significantly impacts how pointing is interpreted and how it should be modeled.
<<</Related work>>>
<<<Communicating Pick-and-Place>>>
This section provides a formalization of pick-and-place tasks and identifies information required to specify them.
Manipulator: Robots that can physically interact with their surroundings are called manipulators, of which robotic arms are the prime example.
Workspace: The manipulator operates in a 3D workspace $\mathcal {W} \subseteq \mathbb {R}^3$. The workspace also contains a stable surface of interest defined by a plane $S\subset \mathcal {W}$ along with various objects. To represent 3D coordinates of workspace positions, we use $x\in \mathcal {W}$.
End-effector: The tool-tips or end-effectors are geometries, often attached at the end of a robotic arm, that can interact with objects in the environment. These form a manipulator's chief mode of picking and placing objects of interest and range from articulated fingers to suction cups. A subset of the workspace that the robot can reach with its end-effector is called the reachable workspace. The end-effector in this work is used as a pointing indicator.
Pick-and-place: Given a target object in the workspace, a pick-and-place task requires the object to be picked up from its initial position and orientation, and placed at a final position and orientation. When a manipulator executes this task in its reachable workspace, it uses its end-effector. The rest of this work ignores the effect of the object's orientation by considering objects with sufficient symmetry. Given this simplification, the pick-and-place task can be viewed as a transition from an initial position $x_{\textit {init}}\in \mathcal {W}$ to a final placement position $x_{\textit {final}}\in \mathcal {W}$. Thus, a pick-and-place task can be specified with the tuple $(x_{\textit {init}}, x_{\textit {final}})$.
Pointing Action: Within its reachable workspace the end-effector of the manipulator can attain different orientations to fully specify a reachable pose $p$, which describes its position and orientation. The robots we study have a directional tooltip that viewers naturally see as projecting a ray $r$ along its axis outward into the scene. In understanding pointing as communication, the key question is the relationship between the ray $r$ and the spatial values $x_{\textit {init}}$ and $x_{\textit {final}}$ that define the pick-and-place task.
To make this concrete, we distinguish between the target of pointing and the intent of pointing. Given the ray $r$ coming out of the end-effector geometry, we define the target of the pointing, $x^*$, as the intersection of this ray with the stable surface, $x^* = r \cap S$.
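In practice, this target is a standard ray-plane intersection. The following is a minimal sketch (not from the paper) in Python/NumPy; the ray values and plane parameters are hypothetical placeholders.

```python
import numpy as np

def pointing_target(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the pointing ray r with the rest surface S (a plane).

    Returns the target x* (the point where r meets S), or None if the ray is
    (near-)parallel to the plane or points away from it.
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    denom = np.dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:          # ray parallel to the table surface
        return None
    t = np.dot(plane_point - ray_origin, plane_normal) / denom
    if t < 0:                      # surface is behind the end-effector
        return None
    return ray_origin + t * ray_dir

# Hypothetical example: end-effector at 0.8 m height pointing down and forward
# onto a table whose surface is the plane z = 0.
x_star = pointing_target(
    ray_origin=np.array([0.0, 0.0, 0.8]),
    ray_dir=np.array([0.5, 0.1, -0.8]),
    plane_point=np.array([0.0, 0.0, 0.0]),
    plane_normal=np.array([0.0, 0.0, 1.0]),
)
print(x_star)
```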
Meanwhile, the intent of pointing specifies one component of a pick-and-place task. There are two cases:
Referential Pointing: The pointing action is intended to identify a target object $o$ to be picked up. This object is the referent of such an action. We can find $x_{\textit {init}}$, based on the present position of $o$.
Locating Pointing: The pointing action is intended to identify the location in the workspace where the object needs to be placed, i.e, $x_{\textit {final}}$.
We study effective ways to express intent for a pick-and-place task. In other words, what is the relationship between a pointing ray $r$ and the location $x_{\textit {init}}$ or $x_{\textit {final}}$ that it is intended to identify? To assess these relationships, we ask human observers to view animations expressing pick-and-place tasks and classify their interpretations. To understand the factors involved, we investigate a range of experimental conditions.
<<</Communicating Pick-and-Place>>>
<<<Experiments>>>
Our experiments share a common animation platform, described in the Experimental Setup, and a common Data Collection protocol. The experiments differ in presenting subjects with a range of experimental conditions, as described in the corresponding section. All of the experiments described here together with the methods chosen to analyze the data were based on a private but approved pre-registration on aspredicted.org. The document is publicly available at: https://aspredicted.org/cg753.pdf.
<<<Experiment Setup>>>
Each animation shows a simulated robot producing two pointing gestures to specify a pick-and-place task. Following the animation, viewers are asked whether a specific image represents a possible result of the specified task.
Robotic Platforms The experiments were performed on two different robotic geometries, based on a Rethink Baxter, and a Kuka IIWA14. The Baxter is a dual-arm manipulator with two arms mounted on either side of a static torso. The experiments only move the right arm of the Baxter. The Kuka consists of a single arm that is vertically mounted, i.e., points upward at the base. In the experiments the robots are shown with a singly fingered tool-tip, where the pointing ray is modeled as the direction of this tool-tip.
Note The real Baxter robot possesses a heads-up display that can be likened to a `head'. This has been removed in the simulations that were used in this study (as shown for example in Figure FIGREF7).
Workspace Setup Objects are placed in front of the manipulators. In certain trials a table is placed in front of the robot as well, and the objects rest in stable configurations on top of the table. A pick-and-place task is specified in terms of the positions of one of the objects.
Objects The objects used in the study include small household items like mugs, saucers and boxes (cuboids), that are all placed in front of the robots.
Motion Generation The end-effector of the manipulator is instructed to move to pre-specified waypoints, designed for the possibility of effective communication, that typically lie between the base of the manipulator and the object itself. Such waypoints fully specify both the position and orientation of the end-effector to satisfy pointing actions. The motions are performed by solving Inverse Kinematics for the end-effector geometry and moving the manipulator along these waypoints using a robotic motion planning library BIBREF23. The motions were replayed on the model of the robot, and rendered in Blender.
Pointing Action Generation Potential pointing targets are placed using a cone $C(r, \theta )$, where $r$ represents the pointing ray and $\theta $ represents the vertex angle of the cone. As illustrated in Fig FIGREF2, the cone allows us to assess the possible divergence between the pointing ray and the actual location of potential target objects on the rest surface $S$.
Given a pointing ray $r$, we assess the resolution of the pointing gesture by sampling $N$ object poses $p_i, i=1:N$ in $P=C(r, \theta ) \cap S$, the intersection of the pointing cone with the rest surface. While $p_i$ is the 6d pose of the object, with translation $t \in \mathbb {R}^3$ and orientation $R \in SO(3)$, only the 2 degrees of freedom $(x, y)$ corresponding to $t$ are varied in the experiments. By fixing the $z$ coordinate of the translation and restricting the z-axis of rotation to be perpendicular to $S$, it is ensured that the object rests in a physically stable configuration on the table.
The $N$ object poses are sampled by fitting an ellipse within $P$ and dividing the ellipse into 4 quadrants $q_1\ldots q_4$ (See Figure FIGREF2 (C)). Within each quadrant $q_i$ the $N/4$ $(x,y)$ positions are sampled uniformly at random. For certain experiments additional samples are generated with an objective to increase coverage of samples within the ellipse by utilizing a dispersion measure.
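As an illustration of this sampling scheme, the sketch below (ours, not the authors' code) draws $N/4$ positions uniformly by area from each quadrant of an ellipse fitted inside $P$; the ellipse center and semi-axes are hypothetical parameters.

```python
import numpy as np

def sample_quadrant_positions(center, a, b, n_total=8, rng=None):
    """Sample n_total (x, y) object positions, n_total/4 per quadrant,
    uniformly by area inside an ellipse with semi-axes (a, b)."""
    rng = rng or np.random.default_rng()
    per_quadrant = n_total // 4
    samples = []
    for q in range(4):  # quadrants q1..q4 each cover an angle range of pi/2
        lo, hi = q * np.pi / 2, (q + 1) * np.pi / 2
        phi = rng.uniform(lo, hi, per_quadrant)
        r = np.sqrt(rng.uniform(0.0, 1.0, per_quadrant))  # uniform by area
        x = center[0] + a * r * np.cos(phi)
        y = center[1] + b * r * np.sin(phi)
        samples.extend(zip(x, y))
    return np.array(samples)

# Hypothetical ellipse fitted inside the cone-surface intersection P.
poses_xy = sample_quadrant_positions(center=(0.6, 0.0), a=0.25, b=0.15)
print(poses_xy)
```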
Speech Some experiments also included verbal cues, with phrases like `Put that there', along with the pointing actions. It was very important for the pointing actions and these verbal cues to be synchronized. To achieve this, we generate the voice using Amazon Polly with text written in SSML format and make sure that the peak of each gesture (the moment a gesture comes to a stop) is aligned with the peak of the corresponding audio phrase in the accompanying speech. During the generation of the video itself, we took note of the peak moments of the gestures and then manipulated the durations between the audio peaks using SSML to match them with the gesture peaks, after analyzing the audio with the open-source tool PRAAT (www.praat.org).
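A minimal sketch of this alignment step is given below (our reconstruction, not the authors' pipeline). It builds an SSML string whose pauses stretch the gaps between phrases so that each audio peak lands on a gesture peak; the peak times, phrase durations, and the use of Amazon Polly via boto3 are assumptions for illustration.

```python
# Assumed, illustrative timings: gesture peaks (seconds into the video) and,
# for each phrase, the offset of its audio peak and its duration (from PRAAT).
gesture_peaks = [1.8, 4.2]
phrases = [("Put that", 0.9, 1.2), ("there", 0.3, 0.5)]  # (text, peak offset, duration)

def build_ssml(phrases, gesture_peaks):
    """Insert <break> tags so each phrase's audio peak aligns with a gesture peak."""
    parts, t = ["<speak>"], 0.0
    for (text, peak_offset, duration), gesture_t in zip(phrases, gesture_peaks):
        pause = max(0.0, gesture_t - peak_offset - t)  # silence needed before phrase
        if pause > 0:
            parts.append('<break time="%dms"/>' % int(pause * 1000))
        parts.append(text)
        t += pause + duration
    parts.append("</speak>")
    return " ".join(parts)

ssml = build_ssml(phrases, gesture_peaks)
print(ssml)

# Synthesis with Amazon Polly (assumed setup; requires AWS credentials):
# import boto3
# audio = boto3.client("polly").synthesize_speech(
#     Text=ssml, TextType="ssml", VoiceId="Joanna", OutputFormat="mp3")
```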
<<</Experiment Setup>>>
<<<Data Collection>>>
Data collection was performed on Amazon Mechanical Turk. All subjects agreed to a consent form and were compensated at an estimated rate of USD 20 an hour. The subject-pool was restricted to non-colorblind US citizens. Subjects are presented with a rendered video of the simulation in which the robot performs one referential pointing action and one locating pointing action, that is, it points to an object and then to a final location. During these executions, synchronized speech is included in some of the trials to provide verbal cues.
Then on the same page, subjects see the image that shows the result of the pointing action. They are asked whether the result is (a) correct, (b) incorrect, or (c) ambiguous.
To test our hypothesis, we studied the interpretation of the two pointing behaviors in different contexts. Assuming our conjecture and a significance level of 0.05, a sample of 28 people in each condition is enough to detect our effect with a 95% power. Participants are asked to report judgments on the interpretation of the pointing action in each class. Each participant undertakes two trials from each class. The range of different cases are described below. Overall, the data collection in this study involved over 7,290 responses to robot pointing actions.
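For readers who wish to reproduce this kind of power analysis, the sketch below (ours, not the preregistered script) uses statsmodels; the assumed effect size (Cohen's d of roughly 1.0) is our own illustrative guess, not a value reported in the paper.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical effect size; the preregistration's actual assumption may differ.
analysis = TTestIndPower()
n_per_condition = analysis.solve_power(effect_size=1.0, alpha=0.05,
                                        power=0.95, alternative="two-sided")
print(round(n_per_condition))   # roughly 27-28 subjects per condition
```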
<<</Data Collection>>>
<<<Experimental Conditions>>>
We used our experiment setup to generate videos and images from the simulation for a range of different conditions.
<<<Referential vs Locating>>>
In this condition, to reduce the chances of possible ambiguities, only one mug is placed on the table. The Baxter robot points its right arm to the mug and then points to its final position, accompanied by a synchronized verbal cue, “Put that there.”
We keep the motion identical across all the trials in this method. We introduce a variability in the initial position of the mug by sampling 8 random positions within conic sections subtending $45^{\circ } , 67.5^{\circ }, $ and $90^{\circ }$ on the surface of the table. New videos are generated for each such position of the mug. This way we can measure how flexible subjects are to the variation of the initial location of the referent object.
To test the effect for the locating pointing action, we test similarly sampled positions around the final pointed location, and display these realizations of the mug as the result images to subjects, while the initial position of the mug is kept perfectly situated.
A red cube, which is in the gesture space of the robot and about twice as big as the mug, is placed on the other side of the table as a visual guide for the subjects to see how objects can be placed on the table. We remove the tablet that is attached to Baxter's head for our experiments.
Effect of speech In order to test the effect of speech on the disparity between the kinds of pointing actions, a set of experiments was designed under the Referential vs Locating method with and without any speech. All subsequent methods include verbal cues during their action execution. These cues are audible in the video.
<<</Referential vs Locating>>>
<<<Reverse Task>>>
One set of experiments is run for the reverse task, with the initial and final positions of the object flipped. As opposed to the first set of experiments, the robot now begins by pointing to an object in the middle of the table, and then to an area towards the table's edge, i.e., the pick and place positions of the object are `reversed'.
The trials are meant to measure the sensitivity of the subjects in pick trials to the direction of the pointing gestures and to the absolute locations that the subjects thought the robot was pointing at.
This condition is designed to be identical to the basic Referential vs Locating study, except for the direction of the action. The motions are still executed on the Baxter's right arm.
<<</Reverse Task>>>
<<<Different Robotic Arm>>>
In order to ensure that the results obtained in this study are not dependent on the choice of the robotic platform or its visual appearance, a second robot—a singly armed industrial Kuka manipulator—is also evaluated in a Referential vs Locating study (shown in Figure FIGREF6).
<<</Different Robotic Arm>>>
<<<Cluttered Scene>>>
To study how the presence of other objects would change the behavior of referential pointing, we examine the interpretation of the pointing actions when there is more than one mug on the table. Given the instructions to the subjects, both objects are candidate targets. This experiment allows the investigation of the effect of a distractor object in the scene on referential pointing.
We start with a setup where there are two mugs placed on the table (similar to the setup in Figure FIGREF14): a target mug placed at position $x_{\textit {object}}$ and a distractor mug at position $x_{\textit {distractor}}$. The robot performs an initial pointing action to a position $x_{\textit {init}}$ on the table. Both objects are sampled around $x_{\textit {init}}$ along the diametric line of the conic section arising from increasing cone angles of $45^\circ , 67.5^\circ , $ and $90^\circ $, where the separation between $x_{\textit {object}}$ and $x_{\textit {distractor}}$ is equal to the length of the diameter of the conic section, $D$. The objects are then positioned on the diametric line with a random offset between $[-\frac{D}{2}, \frac{D}{2}]$ around $x_{\textit {init}}$ and along the line. This means that the objects are at various distances apart, and depending upon the offset, one of the objects is nearer to the pointing action. This setup ensures that the nearer mug serves as the target object and the farther one serves as the distractor. The motions are performed on the Baxter's right arm. The camera perspective in simulation is set to be facing into the pointing direction. The subjects in this trial are shown images of the instant of the referential pointing action.
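A minimal sketch of this placement procedure (our reconstruction, not the authors' code) is shown below; the pointing position, line direction, and diameter are hypothetical values.

```python
import numpy as np

def place_target_and_distractor(x_init, line_dir, diameter, rng=None):
    """Place two mugs on the diametric line through x_init, separated by the
    cone-section diameter D, with the pair shifted by a random offset in
    [-D/2, D/2] along the line."""
    rng = rng or np.random.default_rng()
    line_dir = line_dir / np.linalg.norm(line_dir)
    offset = rng.uniform(-diameter / 2.0, diameter / 2.0)
    mug_a = x_init + (offset - diameter / 2.0) * line_dir
    mug_b = x_init + (offset + diameter / 2.0) * line_dir
    # The mug nearer to x_init is treated as the target, the other as distractor.
    if np.linalg.norm(mug_a - x_init) <= np.linalg.norm(mug_b - x_init):
        return mug_a, mug_b
    return mug_b, mug_a

# Hypothetical values for one cone-section diameter.
target, distractor = place_target_and_distractor(
    x_init=np.array([0.6, 0.0]), line_dir=np.array([1.0, 0.0]), diameter=0.3)
print(target, distractor)
```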
<<</Cluttered Scene>>>
<<<Natural vs Unnatural scene>>>
In this condition we study how the contextual and physical understanding of the world impacts the interpretation of pointing gestures. We generate a scenario for locating pointing in which the right arm of the Baxter points to a final placement position for a cuboidal object on top of a stack of cuboidal objects, but towards the edge, which makes the placement physically unstable. The final configurations of the object (Figure FIGREF17) shown to the users were: a) the object lying on top of the stack, b) the object in the unstable configuration towards the edge of the stack, and c) the object at the bottom of the stack towards one side. New videos are generated for each scenario along with verbal cues.
The pointing action, as well as the objects of interest, remains identical between the natural and unnatural trials. The difference lies in other objects in the scene that could defy gravity and float in the unnatural trials. The subjects were given a text-based instruction at the beginning of an unnatural trial saying they were seeing a scene where “gravity does not exist.”
<<</Natural vs Unnatural scene>>>
<<<Different verbs>>>
To test if the effect is specific to the verb put, we designed a control condition where everything remained the same as the Referential vs Locating trials except the verb put which we replaced with place, move and push. Here again we collect 30 data points for each sampled $x^*$.
<<</Different verbs>>>
<<</Experimental Conditions>>>
<<</Experiments>>>
<<<Analysis>>>
<<<Natural vs Unnatural>>>
As shown in Table TABREF21, we observed that in the natural scene, when the end-effector points towards the edge of the cube that is on top of the stack, subjects place the new cube on top of the stack or on the table instead of on the edge of the cube. However, in the unnatural scene, when we explain to subjects that there is no gravity, a majority agree with the final image that has the cube on the edge. To test if this difference is statistically significant, we use the Fisher exact test BIBREF25. The test statistic value is $0.0478$. The result is significant at $p < 0.05$.
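A Fisher exact test of this kind can be reproduced with SciPy. The sketch below is ours and the 2x2 contingency table counts are hypothetical placeholders, not the study's actual response counts.

```python
from scipy.stats import fisher_exact

# Rows: natural vs. no-gravity scene; columns: agreed vs. did not agree with
# the on-the-edge placement.  Counts are hypothetical placeholders.
table = [[ 9, 21],
         [18, 12]]
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)   # the study reports a value of 0.0478 for its own counts
```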
<<</Natural vs Unnatural>>>
<<<Cluttered>>>
The data from these trials show how human subjects select between the two candidate target objects on the table. Since the instructions do not serve to disambiguate the target mug, the collected data show what the observers deemed as the correct target. Figure FIGREF24 visualizes subjects' responses across trials. The location of each pie uses the $x$-axis to show how much closer one candidate object is to the pointing target than the other, and uses the $y$-axis to show the overall imprecision of pointing. Each pie in Figure FIGREF24 shows the fraction of responses across trials that recorded the nearer (green) mug as correct compared to the farther mug (red). The white shaded fractions of the pies show the fraction of responses where subjects found the gesture ambiguous.
As we can see in Figure FIGREF24, once the two objects are roughly equidistant from the center of pointing (within about 10cm), subjects tend to regard the pointing gesture as ambiguous, but as the difference in distances increases, subjects are increasingly likely to prefer the closer target. In all cases where subjects have a preference for one object over the other, they picked the mug that was the nearer target of the pointing action more often than the farther one.
<<</Cluttered>>>
<<</Analysis>>>
<<<Human Evaluation of Instructions>>>
After designing and conducting our experiments, we became concerned that subjects might regard imprecise referential pointing as understandable but unnatural. If they did, their judgments might combine ordinary interpretive reasoning with additional effort, self-consciousness or repair. We therefore added a separate evaluation to assess how natural the generated pointing actions and instructions are. We recruited 480 subjects from Mechanical Turk using the same protocol described in our Data Collection procedure, and asked them to rank how natural they regarded the instruction on a scale of 0 to 5.
The examples were randomly sampled from the videos of the referential pointing trials that we showed to subjects for both the Baxter and Kuka robots. These examples were selected such that we obtained an equal number of samples from each cone. For Baxter, the average ratings for samples from the $45^{\circ }$, $67.5^{\circ }$ and $90^{\circ }$ cones are $3.625$, $3.521$ and $3.650$, respectively. For Kuka, the average ratings for samples from the $45^{\circ }$, $67.5^{\circ }$ and $90^{\circ }$ cones are $3.450$, $3.375$, and $3.400$. Overall, the average for Baxter is $3.600$, and for Kuka it is $3.408$. The differences between Kuka and Baxter and the differences across cones are not statistically significant ($t \le |1.07|, p > 0.1 $). Thus we have no evidence that subjects regard imprecise pointing as problematic.
<<</Human Evaluation of Instructions>>>
<<<Design Principles>>>
The results of the experiments suggest that locating pointing is interpreted rather precisely, whereas referential pointing is interpreted relatively flexibly. This naturally aligns with the possibility of alternative interpretations. For spatial reference, any location is a potential target. By contrast, for referential pointing, it suffices to distinguish the target object from its distractors.
We can characterize this interpretive process in formal terms by drawing on observations from the philosophical and computational literature on vagueness BIBREF26, BIBREF27, BIBREF28. Any pointing gesture starts from a set of candidate interpretations $D \subset \mathcal {W}$ determined by the context and the communicative goal. In unconstrained situations, locating pointing allows a full set of candidates $D = \mathcal {W}.$ If factors like common-sense physics impose task constraints, that translates to restrictions on feasible targets $CS$, leading to a more restricted set of candidates $D = CS \cap \mathcal {W}$. Finally, for referential pointing, the potential targets are located at $x_1 \ldots x_N \in S$, and $D = \lbrace x_1 \ldots x_N \rbrace .$
Based on the communicative setting, we know that the pointing gesture, like any vague referring expression, must select at least one of the possible interpretations BIBREF28. We can find the best interpretation by its distance to the target $x^*$ of the pointing gesture. Using $d(x,x^*)$ to denote this distance gives us a threshold $\theta = \min _{x \in D} d(x,x^*)$, the distance of the closest candidate interpretation.
Vague descriptions can't be sensitive to fine distinctions BIBREF27. So if a referent at $\theta $ is close enough to the pointing target, then another at $\theta + \epsilon $ must be close enough as well, for any value of $\epsilon $ that is not significant in the conversational context. Our results suggest that viewers regard 10cm (in the scale of the model simulation) as an approximate threshold for a significant difference in our experiments.
In all, we predict that a pointing gesture is interpreted as referring to $\lbrace x \in D | d(x,x^*) \le \theta + \epsilon \rbrace .$ We explain the different interpretations through the different choice of $D$.
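This interpretation rule translates directly into code. The following sketch (ours, not the authors') returns the candidate set $\lbrace x \in D | d(x,x^*) \le \theta + \epsilon \rbrace $ for a given candidate set, pointing target, and slack $\epsilon $ (about 0.10 m, following the threshold suggested above); the example coordinates are hypothetical.

```python
import numpy as np

def interpret_pointing(candidates, x_star, epsilon=0.10):
    """Return all candidates not significantly farther from the pointing
    target x* than the closest one: {x in D | d(x, x*) <= theta + epsilon}."""
    candidates = np.asarray(candidates, dtype=float)
    d = np.linalg.norm(candidates - np.asarray(x_star, dtype=float), axis=1)
    theta = d.min()                      # distance of the closest candidate
    return candidates[d <= theta + epsilon]

# Referential pointing: D is the set of object locations (hypothetical values).
objects = [(0.55, 0.05), (0.62, 0.02), (0.90, 0.40)]
print(interpret_pointing(objects, x_star=(0.60, 0.00)))

# Locating pointing: D is (a subset of) the whole surface, so theta = 0 and
# only placements within epsilon of x* are acceptable.
```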
<<<Locating Pointing>>>
For unconstrained locating pointing, $x^* \in D$, so $\theta =0$. That means, the intended placement cannot differ significantly from the pointing target. Taking into account common sense, we allow for small divergence that connects the pointing, for example, to the closest stable placement.
<<</Locating Pointing>>>
<<<Referential Pointing>>>
For referential pointing, candidates play a much stronger role. A pointing gesture always has the closest object to the pointing target as a possible referent. However, ambiguities arise when the geometries of more than one object intersect with the $\theta +\epsilon $-neighborhood of $x^*$. We can think of that, intuitively, in terms of the effects of $\theta $ and $\epsilon $. Alternative referents give rise to ambiguity not only when they are too close to the target location ($\theta $) but even when they are simply not significantly further away from the target location ($\epsilon $).
<<</Referential Pointing>>>
<<</Design Principles>>>
<<<Conclusion and Future Work>>>
We have presented an empirical study of the interpretation of simulated robots instructing pick-and-place tasks. Our results show that robots can effectively combine pointing gestures and spoken instructions to communicate both object and spatial information. We offer an empirical characterization (the first, to the best of the authors' knowledge) of the use of robot gestures to communicate precise spatial locations for placement purposes. We have suggested that pointing, in line with other vague references, gives rise to a set of candidate interpretations that depend on the task, context and communicative goal. Users pick the interpretations that are not significantly further from the pointing ray than the best ones. This contrasts with previous models that required pointing gestures to target a referent exactly or fall within a context-independent pointing cone.
Our work has a number of limitations that suggest avenues for future work. It remains to implement the design principles on robot hardware, explore the algorithmic process for generating imprecise but interpretable gestures, and verify the interpretations of physically co-present viewers. Note that we used a 2D interface, which can introduce artifacts, for example from the effect of perspective. In addition, robots can in general trade off pointing gestures with other descriptive material in offering instructions. Future work is needed to assess how such trade-offs play out in location reference, not just in object reference.
More tight-knit collaborative scenarios need to be explored, including ones where multiple pick-and-place tasks can be composed to communicate more complex challenges and ones where they involve richer human environments. Our study of common sense settings opens up intriguing avenues for such research, since it suggests ways to take into account background knowledge and expectations to narrow down the domain of possible problem specifications in composite tasks like “setting up a dining table.”
While the current work studies the modalities of pointing and verbal cues, effects of including additional robotic communication in the form of heads-up displays or simulated eye-gaze would be other directions to explore. Such extensions would require lab experiments with human subjects and a real robot. This is the natural next step of our work.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Introduction, Human Evaluation of Instructions"
],
"type": "disordered_section"
}
|
2002.01030
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Detecting Fake News with Capsule Neural Networks
<<<Abstract>>>
Fake news has increased dramatically in social media in recent years. This has prompted the need for effective fake news detection algorithms. Capsule neural networks have been successful in computer vision and are receiving attention for use in Natural Language Processing (NLP). This paper aims to use capsule neural networks in the fake news detection task. We use different embedding models for news items of different lengths. Static word embedding is used for short news items, whereas non-static word embeddings that allow incremental up-training and updating in the training phase are used for medium-length or large news statements. Moreover, we apply different levels of n-grams for feature extraction. Our proposed architectures are evaluated on two recent well-known datasets in the field, namely ISOT and LIAR. The results show encouraging performance, outperforming the state-of-the-art methods by 7.8% on ISOT, and by 3.1% on the validation set and 1% on the test set of the LIAR dataset.
<<</Abstract>>>
<<<Introduction>>>
Flexibility and ease of access to social media have resulted in the use of online channels for news access by a great number of people. For example, nearly two-thirds of American adults access news through online channels BIBREF0, BIBREF1. BIBREF2 also reported that news consumption through social media has significantly increased in Great Britain.
In comparison to traditional media, social networks have proved to be more beneficial, especially during a crisis, because of their ability to spread breaking news much faster BIBREF3. Not all of this news, however, is real, and real information may be changed and manipulated by people due to political, economic, or social motivations. This manipulated data leads to the creation of news that may not be completely true or may not be completely false BIBREF4. Therefore, there is misleading information on social media that has the potential to cause many problems in society. Such misinformation, called fake news, has a wide variety of types and formats. Fake advertisements, false political statements, satires, and rumors are examples of fake news BIBREF0. This spread of fake news, which can even exceed that of mainstream media BIBREF5, motivated many researchers and practitioners to focus on presenting effective automatic frameworks for detecting fake news BIBREF6. Google has announced an online service called “Google News Initiative” to fight fake news BIBREF7. This project tries to help readers recognize fake news and reports BIBREF8.
Detecting fake news is a challenging task. A fake news detection model tries to predict intentionally misleading news based on analyzing real and fake news that has previously been reviewed. Therefore, the availability of high-quality and large-size training data is an important issue.
The task of fake news detection can be a simple binary classification or, in a challenging setting, can be a fine-grained classification BIBREF9. After 2017, when fake news datasets were introduced, researchers tried to increase the performance of their models using this data. Kaggle dataset, ISOT dataset, and LIAR dataset are some of the most well-known publicly available datasets BIBREF10.
In this paper, we propose a new model based on capsule neural networks for detecting fake news. We propose architectures for detecting fake news in different lengths of news statements by using different varieties of word embedding and applying different levels of n-gram as feature extractors. We show these proposed models achieve better results in comparison to the state-of-the-art methods.
The rest of the paper is organized as follows: Section SECREF2 reviews related work about fake news detection. Section SECREF3 presents the model proposed in this paper. The datasets used for fake news detection and evaluation metrics are introduced in Section SECREF4. Section SECREF5 reports the experimental results, comparison with the baseline classification and discussion. Section SECREF6 summarizes the paper and concludes this work.
<<</Introduction>>>
<<<Related work>>>
Fake news detection has been studied in several investigations. BIBREF11 presented an overview of deception assessment approaches, including the major classes and the final goals of these approaches. They also investigated the problem using two approaches: (1) linguistic methods, in which the related language patterns were extracted and precisely analyzed from the news content for making decision about it, and (2) network approaches, in which the network parameters such as network queries and message metadata were deployed for decision making about new incoming news.
BIBREF12 proposed an automated fake news detector, called CSI that consists of three modules: Capture, Score, and Integrate, which predicts by taking advantage of three features related to the incoming news: text, response, and source of it. The model includes three modules; the first one extracts the temporal representation of news articles, the second one represents and scores the behavior of the users, and the last module uses the outputs of the first two modules (i.e., the extracted representations of both users and articles) and use them for the classification. Their experiments demonstrated that CSI provides an improvement in terms of accuracy.
BIBREF13 introduced a new approach which tries to decide if a news is fake or not based on the users that interacted with and/or liked it. They proposed two classification methods. The first method deploys a logistic regression model and takes the user interaction into account as the features. The second one is a novel adaptation of the Boolean label crowdsourcing techniques. The experiments showed that both approaches achieved high accuracy and proved that considering the users who interact with the news is an important feature for making a decision about that news.
BIBREF14 introduced two new datasets that are related to seven different domains, and instead of short statements containing fake news information, their datasets contain actual news excerpts. They deployed a linear support vector machine classifier and showed that linguistic features such as lexical, syntactic, and semantic level features are beneficial to distinguish between fake and genuine news. The results showed that the performance of the developed system is comparable to that of humans in this area.
BIBREF15 provided a novel dataset, called LIAR, consisting of 12,836 labeled short statements. The instances in this dataset are chosen from more natural contexts such as Facebook posts, tweets, political debates, etc. They proposed neural network architecture for taking advantage of text and meta-data together. The model consists of a Convolutional Neural Network (CNN) for feature extraction from the text and a Bi-directional Long Short Term Memory (BiLSTM) network for feature extraction from the meta-data and feeds the concatenation of these two features into a fully connected softmax layer for making the final decision about the related news. They showed that the combination of metadata with text leads to significant improvements in terms of accuracy.
BIBREF16 proved that incorporating speaker profiles into an attention-based LSTM model can improve the performance of a fake news detector. They claim speaker profiles can contribute to the model in two different ways. First, including them in the attention model. Second, considering them as additional input data. They used party affiliation, speaker location, title, and credit history as speaker profiles, and they show this metadata can increase the accuracy of the classifier on the LIAR dataset.
BIBREF17 presented a new dataset for fake news detection, called ISOT. This dataset was entirely collected from real-world sources. They used n-gram models and six machine learning techniques for fake news detection on the ISOT dataset. They achieved the best performance by using TF-IDF as the feature extractor and linear support vector machine as the classifier.
BIBREF18 proposed an end-to-end framework called event adversarial neural network, which is able to extract event-invariant multi-modal features. This model has three main components: the multi-modal feature extractor, the fake news detector, and the event discriminator. The first component uses CNN as its core module. For the second component, a fully connected layer with softmax activation is deployed to predict if the news is fake or not. As the last component, two fully connected layers are used, which aims at classifying the news into one of K events based on the first component representations.
BIBREF19 developed a tractable Bayesian algorithm called Detective, which provides a balance between selecting news that directly maximizes the objective value and selecting news that aids toward learning user's flagging accuracy. They claim the primary goal of their works is to minimize the spread of false information and to reduce the number of users who have seen the fake news before it becomes blocked. Their experiments show that Detective is very competitive against the fictitious algorithm OPT, an algorithm that knows the true users’ parameters, and is robust in applying flags even in a setting where the majority of users are adversarial.
<<</Related work>>>
<<<Capsule networks for fake news detection>>>
In this section, we first introduce different variations of word embedding models. Then, we proposed two capsule neural network models according to the length of the news statements that incorporate different word embedding models for fake news detection.
<<<Different variations of word embedding models>>>
Dense word representations can capture syntactic or semantic information from words. When word representations are expressed in a low-dimensional space, they are called word embeddings. In these representations, words with similar meanings are in close positions in the vector space.
In 2013, BIBREF20 proposed word2vec, which is a group of highly efficient computational models for learning word embeddings from raw text. These models are two-layer neural networks trained on a large volume of text. They can produce vector representations with several hundred dimensions for every word in a vector space, in which words with similar meanings are mapped to close coordinates.
There are pre-trained word2vec vectors such as 'Google News', which were trained on 100 billion words from Google News. One of the popular methods to improve text processing performance is using these pre-trained vectors to initialize word vectors, especially in the absence of a large supervised training set. These distributed vectors can be fed into deep neural networks and used for any text classification task BIBREF21. These pre-trained embeddings, however, can further be enhanced.
BIBREF21 applied different learning settings for the vector representation of words via word2vec for the first time and showed their superiority compared to the regular pre-trained embeddings when used within a CNN model. These settings are as follows:
Static word2vec model: in this model, pre-trained vectors are used as input to the neural network architecture, these vectors are kept static during training, and only the other parameters are learned.
Non-static word2vec model: this model uses the pre-trained vectors at the initialization of learning, but during the training phase, these vectors are fine-tuned for each task using the training data of the target task.
Multichannel word2vec model: the model uses two sets of word2vec vectors, one static and one non-static, and the non-static set is fine-tuned during training while the static set is kept fixed.
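A minimal Keras sketch of the static and non-static settings is shown below (ours, for illustration); the vocabulary size and the random stand-in for the pre-trained embedding matrix are hypothetical placeholders.

```python
import numpy as np
import tensorflow as tf

vocab_size, embed_dim = 20000, 300                        # hypothetical sizes
pretrained = np.random.rand(vocab_size, embed_dim)        # stand-in for word2vec/GloVe

def embedding_layer(trainable):
    return tf.keras.layers.Embedding(
        input_dim=vocab_size,
        output_dim=embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(pretrained),
        trainable=trainable)          # False -> static, True -> non-static

static_embedding = embedding_layer(trainable=False)      # vectors frozen
non_static_embedding = embedding_layer(trainable=True)   # vectors fine-tuned
# A multichannel model would apply both layers to the same input and combine
# their outputs before the convolutional layers.
```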
<<</Different variations of word embedding models>>>
<<<Proposed model>>>
Although different models based on deep neural networks have been proposed for fake news detection, there is still a great need for further improvements in this task. In the current research, we aim at using capsule neural networks to enhance the accuracy of fake news identification systems.
The capsule neural network was introduced by BIBREF22 for the first time in the paper called “Dynamic Routing Between Capsules”. In this paper, they showed that capsule networks could perform better than CNNs on the MNIST dataset with highly overlapping digits. In computer vision, a capsule network is a neural network that tries to perform inverse graphics. In a sense, the approach tries to reverse-engineer the physical process that produces an image of the world BIBREF23.
The capsule network is composed of many capsules, each of which acts like a function and tries to predict the instantiation parameters and presence of a particular object at a given location.
One key feature of capsule networks is equivariance, which aims at keeping detailed information about the location of the object and its pose throughout the network. For example, if someone rotates the image slightly, the activation vectors also change slightly BIBREF24. One of the limitations of a regular CNN is losing the precise location and pose of the objects in an image. Although this is not a challenging issue when classifying the whole image, it can be a bottleneck for image segmentation or object detection that needs precise location and pose. A capsule, however, can overcome this shortcoming in such applications BIBREF24.
Capsule networks have recently received significant attention. This model aims at improving CNNs and RNNs by adding the following capabilities to each source, and target node: (1) the source node has the capability of deciding about the number of messages to transfer to target nodes, and (2) the target node has the capability of deciding about the number of messages that may be received from different source nodes BIBREF25.
After the success of capsule networks in computer vision tasks BIBREF26, BIBREF27, BIBREF28, capsule networks have been used in different NLP tasks, including text classification BIBREF29, BIBREF30, multi-label text classification BIBREF31, sentiment analysis BIBREF18, BIBREF32, identifying aggression and toxicity in comments BIBREF33, and zero-shot user intent detection BIBREF34.
In capsule networks, the features that are extracted from the text are encapsulated into capsules (groups of neurons). The first work that applied capsule networks for text classification was done by BIBREF35. In their research, the performance of the capsule network as a text classification network was evaluated for the first time. Their capsule network architecture includes a standard convolutional layer called n-gram convolutional layer that works as a feature extractor. The second layer is a layer that maps scalar-valued features into a capsule representation and is called the primary capsule layer. The outputs of these capsules are fed to a convolutional capsule layer. In this layer, each capsule is only connected to a local region in the layer below. In the last step, the output of the previous layer is flattened and fed through a feed-forward capsule layer. For this layer, every capsule of the output is considered as a particular class. In this architecture, a max-margin loss is used for training the model. Figure FIGREF6 shows the architecture proposed by BIBREF35.
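Two ingredients of this architecture that are easy to state compactly are the capsule "squash" non-linearity and the max-margin loss of BIBREF22. The NumPy sketch below is our illustration of those standard formulations (with the usual m+ = 0.9, m- = 0.1, lambda = 0.5 defaults from the original capsule paper), not the authors' code, and it omits the routing and convolutional capsule layers.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||): short vectors shrink
    toward 0, long vectors saturate just below length 1."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * (s / np.sqrt(sq_norm + eps))

def margin_loss(v, labels, m_plus=0.9, m_minus=0.1, lam=0.5):
    """Max-margin loss over class capsules v (shape: batch x classes x dim)
    and one-hot labels (shape: batch x classes)."""
    lengths = np.linalg.norm(v, axis=-1)
    present = labels * np.maximum(0.0, m_plus - lengths) ** 2
    absent = lam * (1.0 - labels) * np.maximum(0.0, lengths - m_minus) ** 2
    return np.sum(present + absent, axis=-1).mean()

# Hypothetical shapes: 2 samples, 2 class capsules (fake / real), 16-dim each.
v = squash(np.random.randn(2, 2, 16))
y = np.array([[1.0, 0.0], [0.0, 1.0]])
print(margin_loss(v, y))
```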
Some characteristics of capsules make them suitable for presenting a sentence or document as a vector for text classification. These characteristics include representing attributes of partial entities and expressing semantic meaning in a wide space BIBREF29.
For fake news identification with different length of statements, our model benefits from several parallel capsule networks and uses average pooling in the last stage. With this architecture, the models can learn more meaningful and extensive text representations on different n-gram levels according to the length of texts.
Depending on the length of the news statements, we use two different architectures. Figure FIGREF7 depicts the structure of the proposed model for medium or long news statements. In this model, a non-static word embedding is used as the embedding layer. In this layer, we use 'glove.6B.300d' as a pre-trained word embedding, and we use four parallel networks with four different filter sizes (2, 3, 4, 5) as n-gram convolutional layers for feature extraction. In the next layers, each parallel network has a primary capsule layer followed by a convolutional capsule layer, as presented in Figure FIGREF6. A fully connected capsule layer is used as the last layer of each parallel network. At the end, average pooling is added to produce the final result.
For short news statements, due to the limitation of word sequences, a different structure has been proposed. The layers are like the first model, but only two parallel networks are considered with 3 and 5 filter sizes. In this model, a static word embedding is used. Figure FIGREF8 shows the structure of the proposed model for short news statements.
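To make the overall wiring concrete, the Keras skeleton below (our simplified sketch, not the authors' implementation) shows only the shared embedding, the parallel n-gram convolutional branches, and the averaging of branch outputs; the capsule layers that sit inside each branch in the actual models are replaced here by a plain dense head, and all sizes are hypothetical.

```python
import tensorflow as tf

vocab_size, embed_dim, max_len, n_classes = 20000, 300, 300, 2   # hypothetical

def branch(x, kernel_size):
    """One n-gram branch; in the full model, primary/convolutional/fully
    connected capsule layers would replace the dense head used here."""
    h = tf.keras.layers.Conv1D(128, kernel_size, activation="relu")(x)
    h = tf.keras.layers.GlobalMaxPooling1D()(h)
    return tf.keras.layers.Dense(n_classes, activation="softmax")(h)

inputs = tf.keras.Input(shape=(max_len,))
emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(inputs)   # non-static
outputs = tf.keras.layers.Average()([branch(emb, k) for k in (2, 3, 4, 5)])
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
# The short-statement variant would use only two branches (k = 3 and 5) and a
# static (non-trainable) embedding layer.
```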
<<</Proposed model>>>
<<</Capsule networks for fake news detection>>>
<<<Evaluation>>>
<<<Dataset>>>
Several datasets have been introduced for fake news detection. One of the main requirements for using neural architectures is having a large dataset to train the model. In this paper, we use two datasets, namely ISOT fake news BIBREF17 and LIAR BIBREF15, which have a large number of documents for training deep models. The news statements in ISOT are of medium or long length, while those in LIAR are short.
<<<The ISOT fake news dataset>>>
In 2017, BIBREF17 introduced a new dataset that was collected from real-world sources. This dataset consists of news articles from Reuters.com and Kaggle.com for real news and fake news, respectively. Every instance in the dataset is longer than 200 characters. For each article, the following metadata is available: article type, article text, article title, article date, and article label (fake or real). Table TABREF12 shows the type and size of the articles for the real and fake categories.
<<</The ISOT fake news dataset>>>
<<<The LIAR dataset>>>
As mentioned in Section SECREF2, one of the recent well-known datasets is provided by BIBREF15. BIBREF15 introduced a new large dataset called LIAR, which includes 12.8K human-labeled short statements from the POLITIFACT.COM API. Each statement is evaluated by a POLITIFACT.COM editor for its validity. Six fine-grained labels are considered for the degree of truthfulness, including pants-fire, false, barely-true, half-true, mostly-true, and true. The distribution of labels in this dataset is as follows: 1,050 pants-fire labels and a range of 2,063 to 2,638 for each of the other labels.
In addition to news statements, this dataset contains several items of metadata as speaker profiles for each news item. These metadata include valuable information about the subject, speaker, job, state, party, and total credit history count of the speaker of the news. The total credit history count includes the barely-true counts, false counts, half-true counts, mostly-true counts, and pants-fire counts. The statistics of the LIAR dataset are shown in Table TABREF14. Some excerpt samples from the LIAR dataset are presented in Table TABREF15.
<<</The LIAR dataset>>>
<<</Dataset>>>
<<<Experimental setup>>>
The experiments of this paper were conducted on a PC with Intel Core i7 6700k, 3.40GHz CPU; 16GB RAM; Nvidia GeForce GTX 1080Ti GPU in a Linux workstation. For implementing the proposed model, the Keras library BIBREF36 was used, which is a high-level neural network API.
<<</Experimental setup>>>
<<<Evaluation metrics>>>
The evaluation metric in our experiments is the classification accuracy. Accuracy is the ratio of correct predictions to the total number of samples and is computed as:
$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$, where TP represents the number of True Positive results, FP represents the number of False Positive results, TN represents the number of True Negative results, and FN represents the number of False Negative results.
<<</Evaluation metrics>>>
<<</Evaluation>>>
<<<Results>>>
For evaluating the effectiveness of the proposed model, a series of experiments on two datasets were performed. These experiments are explained in this section and the results are compared to other baseline methods. We also discuss the results for every dataset separately.
<<<Classification for ISOT dataset>>>
As mentioned in Section SECREF4, BIBREF17 presented the ISOT dataset. Following the baseline paper, we use 1000 articles from each of the real and fake sets, a total of 2000 articles, as the test set, and the model is trained with the rest of the data.
First, the proposed model is evaluated with the different word embeddings described in Section SECREF1. Table TABREF20 shows the results of applying different word embeddings to the proposed model on ISOT, which consists of medium- and long-length news statements. The best result is achieved by applying the non-static embedding.
BIBREF17 evaluated different machine learning methods for fake news detection on the ISOT dataset, including the Support Vector Machine (SVM), the Linear Support Vector Machine (LSVM), the K-Nearest Neighbor (KNN), the Decision Tree (DT), the Stochastic Gradient Descent (SGD), and the Logistic regression (LR) methods.
Table TABREF21 shows the performance of non-static capsule network for fake news detection in comparison to other methods. The accuracy of our model is 7.8% higher than the best result achieved by LSVM.
<<</Classification for ISOT dataset>>>
<<<Discussion>>>
The proposed model can predict the true labels with high accuracy, resulting in a very small number of wrong predictions. Table TABREF23 shows the titles of two wrongly predicted samples for detecting fake news. To analyze our results, we investigate the effects of the sample words that appear in training statements tagged as real and fake, separately.
For this analysis, all of the words and their frequencies are extracted from the two wrong samples and from both the real- and fake-labeled parts of the training data. Table TABREF24 summarizes this data. Then, for every wrongly predicted sample, stop-words are omitted and words with a frequency of more than two are listed. After that, all of these words and their frequencies in the real and fake training data are extracted and normalized. Table TABREF25 and Table TABREF28 show the normalized word frequencies for each sample, respectively. In these tables, for ease of comparison, the normalized frequencies of the real and fake labels of the training data and the normalized frequency of each word in every wrong sample are multiplied by 10.
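A small sketch of this frequency analysis is given below (our reconstruction, not the authors' code); the stop-word list, the sample text, and the mini-corpora standing in for the real and fake training splits are hypothetical placeholders, while the threshold of more than two occurrences and the scaling by 10 follow the description above.

```python
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that"}  # assumed list

def frequent_words(text, min_count=3):
    """Words in a wrongly predicted sample that occur more than twice,
    after removing stop-words."""
    tokens = [w for w in text.lower().split() if w not in STOP_WORDS]
    counts = Counter(tokens)
    return {w: c for w, c in counts.items() if c >= min_count}

def normalized_frequency(words, corpus_tokens, scale=10):
    """Frequency of each word in a (real- or fake-labelled) corpus,
    normalized by corpus size and multiplied by `scale` for readability."""
    counts = Counter(corpus_tokens)
    total = len(corpus_tokens)
    return {w: scale * counts[w] / total for w in words}

# Hypothetical mini-corpora standing in for the real/fake training splits.
real_tokens = "tax state budget tax plan state tax".split()
fake_tokens = "trump sanders all even trump all".split()
sample_words = frequent_words("the tax plan and the state tax rules and the tax cut")
print(normalized_frequency(sample_words, real_tokens))
print(normalized_frequency(sample_words, fake_tokens))
```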
The label of Sample 1 is predicted as fake, but it is real. In Table TABREF25, the six most frequent words of Sample 1 are listed; the word "tax" appears twice as often as each of the other words in Sample 1, and this word is clearly more frequent in the training data with real labels. The same observation holds for other words such as "state".
The text of Sample 2 is predicted as real news, but it is fake. Table TABREF28 lists the six most frequent words of Sample 2. The two most frequent words of this text are "trump" and "sanders". These words are more frequent in the training data with fake labels than in the training data with real labels. "All" and "even" are two other frequent words. "Even" is used to refer to something surprising, unexpected, unusual, or extreme, and "all" means every one, the complete number or amount, or the whole; therefore, a text that includes these words has more potential to be classified as fake news. These experiments show the strong effect of the sample word frequencies on the prediction of the labels.
<<</Discussion>>>
<<<Classification for the LIAR dataset>>>
As mentioned in Section SECREF13, the LIAR dataset is a multi-label dataset with short news statements. In comparison to the ISOT dataset, the classification task for this dataset is more challenging. We evaluate the proposed model while using different metadata, which is considered as speaker profiles. Table TABREF30 shows the performance of the capsule network for fake news detection by adding every metadata. The best result of the model is achieved by using history as metadata. The results show that this model can perform better than state-of-the-art baselines including hybrid CNN BIBREF15 and LSTM with attention BIBREF16 by 3.1% on the validation set and 1% on the test set.
<<</Classification for the LIAR dataset>>>
<<</Results>>>
<<<Conclusion>>>
In this paper, we apply capsule networks for fake news detection. We propose two architectures for different lengths of news statements. We apply two strategies to improve the performance of the capsule networks for the task. First, for detecting the medium or long length of news text, we use four parallel capsule networks that each one extracts different n-gram features (2,3,4,5) from the input texts. Second, we use non-static embedding such that the word embedding model is incrementally up-trained and updated in the training phase.
Moreover, as a fake news detector for short news statements, we use only two parallel networks with 3 and 5 filter sizes as feature extractors and a static model for word embedding. For evaluation, two datasets are used: the ISOT dataset, containing medium or long news texts, and LIAR, containing short statements. The experimental results on these two well-known datasets showed improvements in terms of accuracy of 7.8% on the ISOT dataset, and of 3.1% on the validation set and 1% on the test set of the LIAR dataset.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Introduction, Related work"
],
"type": "disordered_section"
}
|
2004.03788
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Satirical News Detection with Semantic Feature Extraction and Game-theoretic Rough Sets
<<<Abstract>>>
Satirical news detection is an important yet challenging task to prevent spread of misinformation. Many feature based and end-to-end neural nets based satirical news detection systems have been proposed and delivered promising results. Existing approaches explore comprehensive word features from satirical news articles, but lack semantic metrics using word vectors for tweet form satirical news. Moreover, the vagueness of satire and news parody determines that a news tweet can hardly be classified with a binary decision, that is, satirical or legitimate. To address these issues, we collect satirical and legitimate news tweets, and propose a semantic feature based approach. Features are extracted by exploring inconsistencies in phrases, entities, and between main and relative clauses. We apply game-theoretic rough set model to detect satirical news, in which probabilistic thresholds are derived by game equilibrium and repetition learning mechanism. Experimental results on the collected dataset show the robustness and improvement of the proposed approach compared with Pawlak rough set model and SVM.
<<</Abstract>>>
<<<Introduction>>>
Satirical news, which uses parody characterized in a conventional news style, has now become an entertainment on social media. While news satire is claimed to be purely comedic and for amusement, it makes statements on real events, often with the aim of attaining social criticism and influencing change BIBREF0. Satirical news can also be misleading to readers, even though it is not designed for falsification. Given such sophistication, satirical news detection is a necessary yet challenging natural language processing (NLP) task. Many feature based fake or satirical news detection systems BIBREF1, BIBREF2, BIBREF3 extract features from word relations given by statistics or a lexical database, as well as other linguistic features. In addition, with the great success of deep learning in NLP in recent years, many end-to-end neural nets based detection systems BIBREF4, BIBREF5, BIBREF6 have been proposed and delivered promising results on satirical news article detection.
However, with the evolution of fast-paced social media, satirical news has been condensed into a satirical-news-in-one-sentence form. For example, one single tweet of “If earth continues to warm at current rate moon will be mostly underwater by 2400” by The Onion is more widely consumed and spread by social media users than the corresponding full article posted on The Onion website. Existing detection systems trained on full document data might not be applicable to this form of satirical news. Therefore, we collect news tweets from satirical news sources such as The Onion, The New Yorker (Borowitz Report) and legitimate news sources such as Wall Street Journal and CNN Breaking News. We explore the syntactic tree of the sentence and extract inconsistencies between attributes and the head noun in noun phrases. We also detect the existence of named entities and relations between named entities and noun phrases as well as contradictions between the main clause and the corresponding prepositional phrase. For satirical news, such inconsistencies often exist since satirical news usually combines irrelevant components so as to attain surprise and humor. The discrepancies are measured by cosine similarity between word components where words are represented by Glove BIBREF7. Sentence structures are derived by Flair, a state-of-the-art NLP framework, which better captures part-of-speech and named entity structures BIBREF8.
Due to the obscurity of the satire genre and the lack of information in tweet-form satirical news, there exists ambiguity in satirical news, which makes a traditional binary decision difficult. That is, it is difficult to classify a news item as satirical or legitimate with the available information. Three-way decisions, proposed by YY Yao, add a deferral option to the traditional yes-and-no binary decisions and can be used to classify satirical news BIBREF9, BIBREF10. That is, a news item may be classified as satirical, legitimate, or deferred. We apply rough set models, particularly game-theoretic rough sets, to classify news into three groups, i.e., satirical, legitimate, and deferral. The game-theoretic rough set (GTRS) model, proposed by JT Yao and Herbert, is a recent promising model for decision making in the rough set context BIBREF11. GTRS determine three decision regions from a tradeoff perspective when multiple criteria are involved to evaluate the classification models BIBREF12. Games are formulated to obtain a tradeoff between the involved criteria. The balanced thresholds of the three decision regions can be induced from the game equilibria. GTRS have been applied in recommendation systems BIBREF13, medical decision making BIBREF14, uncertainty analysis BIBREF15, and spam filtering BIBREF16.
We apply the GTRS model to our preprocessed dataset and divide all news into satirical, legitimate, or deferral regions. The probabilistic thresholds that determine the three decision regions are obtained by formulating competitive games between accuracy and coverage and then finding the Nash equilibria of the games. We perform extensive experiments on the collected dataset, fine-tuning the model with different discretization methods and variations of equivalence classes. The experimental results show that the performance of the proposed model is superior to the Pawlak rough set model and SVM.
<<</Introduction>>>
<<<Related Work>>>
Satirical news detection is an important yet challenging NLP task. Many feature based models have been proposed. Burfoot et al. extracted features of headline, profanity, and slang using word relations given by statistical metrics and lexical database BIBREF1. Rubin et al. proposed an SVM based model with five features (absurdity, humor, grammar, negative affect, and punctuation) for fake news document detection BIBREF2. Yang et al. presented linguistic features such as psycholinguistic features based on dictionaries and writing stylistic features from part-of-speech tags distribution frequency BIBREF17. Shu et al. gave a survey in which a set of feature extraction methods is introduced for fake news on social media BIBREF3. Conroy et al. also use social network behavior to detect fake news BIBREF18. For satirical sentence classification, Davidov et al. extract patterns using word frequency and punctuation features for tweet sentences and Amazon comments BIBREF19. The detection of a certain type of sarcasm which contrasts positive sentiment with a negative situation by analyzing the sentence pattern with bootstrapped learning was also discussed BIBREF20. Although word level statistical features are widely used, with advanced word representations and state-of-the-art part-of-speech tagging and named entity recognition models, we observe that semantic features are more important than word level statistical features to model performance. Thus, we decompose the syntactic tree and use word vectors to more precisely capture the semantic inconsistencies in different structural parts of a satirical news tweet.
Recently, with the success of deep learning in NLP, many researchers attempted to detect fake news with end-to-end neural nets based approaches. Ruchansky et al. proposed a hybrid deep neural model which processes both text and user information BIBREF5, while Wang et al. proposed a neural network model that takes both text and image data BIBREF6 for detection. Sarkar et al. presented a neural network with attention to capture both sentence-level and document-level satire BIBREF4. Some research analyzed sarcasm from non-news text. Ghosh and Veale BIBREF21 used both the linguistic context and the psychological context information with a bi-directional LSTM to detect sarcasm in users' tweets. They also published a feedback-based dataset by collecting the responses from the tweet authors for future analysis. While all these works detect fake news given full text or image content, or target non-news tweets, we attempt to bridge the gap and detect satirical news by analyzing news tweets which concisely summarize the content of news.
<<</Related Work>>>
<<<Methodology>>>
In this section, we will describe the composition and preprocessing of our dataset and introduce our model in detail. We create our dataset by collecting legitimate and satirical news tweets from different news source accounts. Our model aims to detect whether the content of a news tweet is satirical or legitimate. We first extract the semantic features based on inconsistencies in different structural parts of the tweet sentences, and then use these features to train game-theoretic rough set decision model.
<<<Dataset>>>
We collected approximately 9,000 news tweets from satirical news sources such as The Onion and Borowitz Report and about 11,000 news tweets from legitimate news sources such as Wall Street Journal and CNN Breaking News over the past three years. Each tweet is a concise summary of a news article. Duplicated and extremely short tweets are removed. A news tweet is labeled as satirical if it is written by satirical news sources and legitimate if it is from legitimate news sources. Table TABREF2 gives examples of the tweet instances that comprise our dataset.
<<</Dataset>>>
<<<Semantic Feature Extraction>>>
Satirical news is not based on facts and does not aim to state them. Rather, it uses parody or humor to make statements, criticisms, or simply amusement. In order to achieve such effects, contradictions are heavily utilized. Therefore, inconsistencies are prominent in different parts of a satirical news tweet. In addition, there is often a lack of named entities, or inconsistency between entities, in news satire. We extracted these features at the semantic level from different sub-structures of the news tweet. Different structural parts of the sentence are derived by part-of-speech tagging and named entity recognition with Flair. The inconsistencies between different structures are measured by cosine similarity of word phrases, where words are represented by Glove word vectors. We explored three different aspects of inconsistency and designed metrics for their measurement. A word level feature using tf-idf BIBREF22 is added for robustness.
<<<Inconsistency in Noun Phrase Structures>>>
One way for a news satire to obtain a surprise or humor effect is to combine irrelevant or rarely co-occurring attributes with the head noun they modify. For example, noun phrases such as “rampant accountability", “posthumous apology", “Vatican basement", “self-imposed mental construct" and other rare combinations are widely used in satirical news, while the individual words themselves are common. To measure such inconsistency, we first select all leaf noun phrases (NP) extracted from the semantic trees to avoid repeated calculation. Then for each noun phrase, each adjacent word pair is selected and represented by 100-dim Glove word vectors denoted as $(v_{t},w_{t})$. We define the averaged cosine similarity of noun phrase word pairs as:
where $T$ is the total number of word pairs. We use $S_{N\!P}$ as a feature to capture the overall inconsistency in noun phrase use. $S_{N\!P}$ ranges from -1 to 1, where a smaller value indicates more significant inconsistency.
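As a rough illustration of this metric (a minimal sketch, not the authors' code), the snippet below averages cosine similarities over adjacent word pairs in each leaf noun phrase; the `embed` lookup and its toy 3-dimensional vectors stand in for the 100-dim Glove table used in the paper.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def np_inconsistency(noun_phrases, embed):
    """S_NP: average cosine similarity over adjacent word pairs
    of all leaf noun phrases in one tweet (smaller -> more inconsistent)."""
    sims = []
    for phrase in noun_phrases:
        words = [w for w in phrase if w in embed]
        for v, w in zip(words, words[1:]):          # adjacent pairs (v_t, w_t)
            sims.append(cosine(embed[v], embed[w]))
    return sum(sims) / len(sims) if sims else 0.0   # fallback if no pairs found

# Toy demo with hand-made 3-dim vectors (real use: 100-dim Glove).
embed = {"rampant": np.array([0.9, 0.1, 0.0]),
         "accountability": np.array([0.0, 0.2, 0.9]),
         "breaking": np.array([0.8, 0.3, 0.1]),
         "news": np.array([0.7, 0.4, 0.2])}
print(np_inconsistency([["rampant", "accountability"], ["breaking", "news"]], embed))
```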
<<</Inconsistency in Noun Phrase Structures>>>
<<<Inconsistency Between Clauses>>>
Another commonly used rhetorical approach in news satire is to create a contradiction between the main clause and its prepositional phrase or relative clause. For instance, in the tweet “Trump boys counter Chinese currency manipulation $by$ adding extra zeros To $20 Bills.", contradiction or surprise is gained by contrasting irrelevant statements provided by different parts of the sentence. Let $q$ and $p$ denote two clauses separated by a main/relative relation or a preposition, and $(w_{1},w_{2},\ldots ,w_{q})$ and $(v_{1},v_{2},\ldots ,v_{p})$ be the vectorized words in $q$ and $p$. Then we define the inconsistency between $q$ and $p$ as:
Similarly, the feature $S_{Q\!P}$ is measured by cosine similarity of linear summations of word vectors, where a smaller value indicates more significant inconsistency.
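In the same spirit, a minimal sketch of $S_{Q\!P}$: each clause is represented by the sum of its word vectors and the two sums are compared by cosine similarity. The `embed` dictionary with toy 3-dim vectors is again only a stand-in for the Glove table.

```python
import numpy as np

def clause_inconsistency(clause_q, clause_p, embed):
    """S_QP: cosine similarity between the summed word vectors of the
    main clause q and its prepositional/relative clause p."""
    q = np.sum([embed[w] for w in clause_q if w in embed], axis=0)
    p = np.sum([embed[w] for w in clause_p if w in embed], axis=0)
    return float(np.dot(q, p) / (np.linalg.norm(q) * np.linalg.norm(p)))

# Toy 3-dim vectors; the paper uses 100-dim Glove vectors.
embed = {"counter": np.array([0.2, 0.9, 0.1]),
         "currency": np.array([0.1, 0.8, 0.3]),
         "adding": np.array([0.9, 0.1, 0.2]),
         "zeros": np.array([0.8, 0.0, 0.4])}
print(clause_inconsistency(["counter", "currency"], ["adding", "zeros"], embed))
```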
<<</Inconsistency Between Clauses>>>
<<<Inconsistency Between Named Entities and Noun Phrases>>>
Even though many satirical news tweets are based on real persons or events, most of them lack specific entities. Rather, because the news is fabricated, news writers use words such as “man",“woman",“local man", “area woman",“local family" as the subject. However, the inconsistency between named entities and noun phrases often exists in news satire if a named entity is included. For example, the named entity “Andrew Yang" and the noun phrase “time vortex" show greater inconsistency than “President Trump", "Senate Republicans", and “White House" do in the legitimate news “President Trump invites Senate Republicans to the White House to talk about the funding bill." We define such inconsistency as a categorical feature:
$S_{N\! E\! R\! N}$ is the cosine similarity of named entities and noun phrases of a certain sentence and $\bar{S}_{N\! E\! R\! N}$ is the mean value of $S_{N\! E\! R\! N}$ in corpus.
<<</Inconsistency Between Named Entities and Noun Phrases>>>
<<<Word Level Feature Using TF-IDF>>>
We calculated the difference of tf-idf scores between the legitimate news corpus and the satirical news corpus for each single word. Then, the set $S_{voc}$, which includes the most representative legitimate news words, is created by selecting the top 100 words given the tf-idf difference. For a news tweet and any word $w$ in the tweet, we define the binary feature $B_{voc}$ as:
<<</Word Level Feature Using TF-IDF>>>
<<</Semantic Feature Extraction>>>
<<<GTRS Decision Model>>>
We construct a Game-theoretic Rough Sets model for classification given the extracted features. Suppose $E\subseteq U \times U$ is an equivalence relation on a finite nonempty universe of objects $U$, where $E$ is reflexive, symmetric, and transitive. The equivalence class containing an object $x$ is given by $[x]=\lbrace y\in U|xEy\rbrace $. The objects in one equivalence class all have the same attribute values. In the satirical news context, given an undefined concept $satire$, probabilistic rough sets divide all news into three pairwise disjoint groups i.e., the satirical group $POS(satire)$, legitimate group $NEG(satire)$, and deferral group $BND(satire)$, by using the conditional probability $Pr(satire|[x]) = \frac{|satire\cap [x]|}{|[x]|}$ as the evaluation function, and $(\alpha ,\beta )$ as the acceptance and rejection thresholds BIBREF23, BIBREF9, BIBREF10, that is,
Given an equivalence class $[x]$, if the conditional probability $Pr(satire|[x])$ is greater than or equal to the specified acceptance threshold $\alpha $, i.e., $Pr(satire|[x])\ge \alpha $, we accept the news in $[x]$ as $satirical$. If $Pr(satire|[x])$ is less than or equal to the specified rejection threshold $\beta $, i.e., $Pr(satire|[x])\le \beta $ we reject the news in $[x]$ as $satirical$, or we accept the news in $[x]$ as $legitimate$. If $Pr(satire|[x])$ is between $\alpha $ and $\beta $, i.e., $\beta <Pr(satire|[x])<\alpha $, we defer to make decisions on the news in $[x]$. Pawlak rough sets can be viewed as a special case of probabilistic rough sets with $(\alpha ,\beta )=(1,0)$.
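The three-way assignment itself is straightforward; the sketch below (illustrative only, not the authors' code) maps a conditional probability $Pr(satire|[x])$ to the positive, negative, or boundary region for given thresholds, using the paper's final thresholds $(0.52, 0.48)$ in the demo.

```python
def three_way_decision(pr_satire_given_x, alpha, beta):
    """Assign an equivalence class to the positive (satirical), negative
    (legitimate), or boundary (deferral) region given thresholds (alpha, beta)."""
    if pr_satire_given_x >= alpha:
        return "POS"   # accept as satirical
    if pr_satire_given_x <= beta:
        return "NEG"   # accept as legitimate
    return "BND"       # defer the decision

# Pawlak rough sets correspond to (alpha, beta) = (1, 0).
for p in (0.95, 0.50, 0.10):
    print(p, three_way_decision(p, alpha=0.52, beta=0.48))
```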
Given a pair of probabilistic thresholds $(\alpha , \beta )$, we can obtain a news classifier according to Equation (DISPLAY_FORM13). The three regions are a partition of the universe $U$,
Then, the accuracy and coverage rate to evaluate the performance of the derived classifier are defined as follows BIBREF12,
The criterion coverage indicates the proportions of news that can be confidently classified. Next, we will obtain $(\alpha , \beta )$ by game formulation and repetition learning.
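Since the display equations are elided in this version, the sketch below computes accuracy and coverage in the usual GTRS sense — accuracy over the news that receive a definite decision, coverage as the fraction that do — from a toy list of equivalence classes given as $(Pr(X_i), Pr(Satire|X_i))$ pairs. The exact definitions are an assumption on our part, not the authors' code.

```python
def evaluate_thresholds(classes, alpha, beta):
    """Accuracy and coverage of the three-way classifier for thresholds (alpha, beta).
    `classes` is a list of (Pr(X_i), Pr(Satire|X_i)) pairs."""
    decided, correct = 0.0, 0.0
    for pr_x, pr_satire in classes:
        if pr_satire >= alpha:            # POS: accept as satirical
            decided += pr_x
            correct += pr_x * pr_satire
        elif pr_satire <= beta:           # NEG: accept as legitimate
            decided += pr_x
            correct += pr_x * (1.0 - pr_satire)
    accuracy = correct / decided if decided else 1.0   # convention: 1 if nothing decided
    coverage = decided
    return accuracy, coverage

toy = [(0.10, 1.00), (0.15, 0.00), (0.30, 0.90), (0.25, 0.10), (0.20, 0.50)]
print(evaluate_thresholds(toy, alpha=1.0, beta=0.0))    # Pawlak thresholds
print(evaluate_thresholds(toy, alpha=0.52, beta=0.48))  # looser GTRS thresholds
```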
<<<Game Formulation>>>
We construct a game $G=\lbrace O,S,u\rbrace $ given the set of game players $O$, the set of strategy profile $S$, and the payoff functions $u$, where the accuracy and coverage are two players, respectively, i.e., $O=\lbrace acc, cov\rbrace $.
The set of strategy profiles $S=S_{acc}\times S_{cov}$, where $S_{acc}$ and $S_{cov} $ are sets of possible strategies or actions performed by players $acc$ and $cov$. The initial thresholds are set as $(1,0)$. All these strategies are the changes made on the initial thresholds,
$c_{acc}$ and $c_{cov}$ denote the change steps used by the two players, and their values are determined by the concrete experimental data set.
Payoff functions. The payoffs of players are $u=(u_{acc},u_{cov})$, and $u_{acc}$ and $u_{cov}$ denote the payoff functions of players $acc$ and $cov$, respectively. Given a strategy profile $p=(s, t)$ with player $acc$ performing $s$ and player $cov$ performing $t$, the payoffs of $acc$ and $cov$ are $u_{acc}(s, t)$ and $u_{cov}(s, t)$. We use $u_{acc}(\alpha ,\beta )$ and $u_{cov}(\alpha ,\beta )$ to show this relationship. The payoff functions $u_{acc}(\alpha ,\beta )$ and $u_{cov}(\alpha ,\beta )$ are defined as,
where $Acc_{(\alpha , \beta )}(Satire)$ and $Cov_{(\alpha , \beta )}(Satire)$ are the accuracy and coverage defined in Equations (DISPLAY_FORM15) and (DISPLAY_FORM16).
Payoff table. We use payoff tables to represent the formulated game. Table TABREF20 shows a payoff table example in which both players have the three strategies defined above.
The arrow $\downarrow $ denotes decreasing a value and $\uparrow $ denotes increasing a value. On each cell, the threshold values are determined by two players.
<<</Game Formulation>>>
<<<Repetition Learning Mechanism>>>
We repeat the game with the new thresholds until a balanced solution is reached. We first analyze the pure strategy equilibrium of the game and then check whether the stopping criteria are satisfied.
Game equilibrium. The game solution of pure strategy Nash equilibrium is used to determine possible game outcomes in GTRS. The strategy profile $(s_{i},t_{j})$ is a pure strategy Nash equilibrium, if
This means that neither player would like to change their strategy, as they would lose payoff by deviating from this strategy profile, provided that the player knows the other player's strategy.
Repetition of games. Assuming that we formulate a game, in which the initial thresholds are $(\alpha , \beta )$, and the equilibrium analysis shows that the thresholds corresponding to the equilibrium are $(\alpha ^{*}, \beta ^{*})$. If the thresholds $(\alpha ^{*}, \beta ^{*})$ do not satisfy the stopping criterion, we will update the initial thresholds in the subsequent games. The initial thresholds of the new game will be set as $(\alpha ^{*}, \beta ^{*})$. If the thresholds $(\alpha ^{*}, \beta ^{*})$ satisfy the stopping criterion, we may stop the repetition of games.
Stopping criterion. We define the stopping criteria so that the iterations of games can stop at a proper time. In this research, the repetition stops when, with the thresholds still within the valid range, the increase in one player's payoff is less than the decrease in the other player's payoff.
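To make the game formulation and repetition mechanism concrete, here is a hedged sketch: a 3x3 payoff table is built from a generic `payoffs(alpha, beta)` function (player $acc$ lowers $\alpha $, player $cov$ raises $\beta $), a pure-strategy Nash equilibrium is located, and the game is repeated with updated thresholds until the stated stopping criterion or the range constraint is hit. The strategy sets, the step size, and the toy payoff function are illustrative assumptions, not the authors' implementation.

```python
from itertools import product

def pure_nash(table):
    """Pure-strategy Nash equilibria of a two-player game given as
    {(i, j): (u_acc, u_cov)} over strategy indices i (acc) and j (cov)."""
    rows = {i for i, _ in table}
    cols = {j for _, j in table}
    equilibria = []
    for i, j in table:
        best_acc = all(table[(i, j)][0] >= table[(k, j)][0] for k in rows)
        best_cov = all(table[(i, j)][1] >= table[(i, l)][1] for l in cols)
        if best_acc and best_cov:
            equilibria.append((i, j))
    return equilibria

def repeated_game(payoffs, alpha=1.0, beta=0.0, step=0.03, max_iter=50):
    """Repeat 3x3 threshold games: acc may lower alpha by 0, step or 2*step,
    cov may raise beta by the same amounts.  Stop when the coverage gained no
    longer exceeds the accuracy lost, or thresholds would leave 0 < beta < alpha < 1."""
    moves = [0.0, step, 2 * step]
    for _ in range(max_iter):
        table = {(i, j): payoffs(alpha - moves[i], beta + moves[j])
                 for i, j in product(range(3), range(3))}
        eq = pure_nash(table)
        if not eq:
            break
        i, j = eq[0]
        gain = table[(i, j)][1] - table[(0, 0)][1]     # coverage gained at equilibrium
        loss = table[(0, 0)][0] - table[(i, j)][0]     # accuracy lost at equilibrium
        new_alpha, new_beta = alpha - moves[i], beta + moves[j]
        if (i, j) == (0, 0) or new_beta >= new_alpha or gain <= loss:
            break                                      # stopping criterion
        alpha, beta = new_alpha, new_beta
    return alpha, beta

# Toy payoff: accuracy shrinks and coverage grows as the boundary region narrows.
toy_payoffs = lambda a, b: (0.5 + 0.5 * (a - b), 1.0 - (a - b) ** 2)
print(repeated_game(toy_payoffs))
```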
<<</Repetition Learning Mechanism>>>
<<</GTRS Decision Model>>>
<<</Methodology>>>
<<<Experiments>>>
There are 8757 news records in our preprocessed data set. We use Jenks natural breaks BIBREF24 to discretize continuous variables $S_{N\!P}$ and $S_{Q\!P}$ both into five categories denoted by nominal values from 0 to 4, where larger values still fall into bins with larger nominal value. Let $D_{N\!P}$ and $D_{Q\!P}$ denote the discretized variables $S_{N\!P}$ and $S_{Q\!P}$, respectively. We derived the information table that only contains discrete features from our original dataset. A fraction of the information table is shown in Table TABREF23.
The news whose condition attributes have the same values are classified in an equivalence class $X_i$. We derived 149 equivalence classes and calculated the corresponding probability $Pr(X_i)$ and condition probability $Pr(Satire|X_i)$ for each $X_i$. The probability $Pr(X_{i})$ denotes the ratio of the number of news contained in the equivalence class $X_i$ to the total number of news in the dataset, while the conditional probability $Pr(Satire|X_{i})$ is the proportion of news in $X_i$ that are satirical. We combine the equivalence classes with the same conditional probability and reduce the number of equivalence classes to 108. Table TABREF24 shows a part of the probabilistic data information about the concept satire.
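A minimal sketch of how the probabilistic information in Table TABREF24 can be derived from the discretised table: group news items by their condition attribute values and compute $Pr(X_i)$ and $Pr(Satire|X_i)$ per equivalence class (toy rows only; not the authors' code).

```python
from collections import defaultdict

def equivalence_classes(rows):
    """Group news by their (discretised) condition attribute values and
    compute Pr(X_i) and Pr(Satire|X_i) for each equivalence class X_i.
    Each row is (feature_tuple, is_satire)."""
    counts = defaultdict(lambda: [0, 0])          # class -> [n_items, n_satire]
    for features, is_satire in rows:
        counts[features][0] += 1
        counts[features][1] += int(is_satire)
    n = len(rows)
    return {x: (c / n, s / c) for x, (c, s) in counts.items()}

toy_rows = [((2, 3, 1, 0), True), ((2, 3, 1, 0), True), ((2, 3, 1, 0), False),
            ((0, 1, 0, 1), False), ((0, 1, 0, 1), False)]
for x, (pr_x, pr_satire) in equivalence_classes(toy_rows).items():
    print(x, round(pr_x, 2), round(pr_satire, 2))
```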
<<<Finding Thresholds with GTRS>>>
We formulated a competitive game between the criteria accuracy and coverage to obtain the balanced probabilistic thresholds with the initial thresholds $(\alpha , \beta )=(1,0)$ and learning rate 0.03. As shown in the payoff table Table TABREF26,
the cell at the bottom-right corner is the game equilibrium, whose strategy profile is ($\beta $ increases 0.06, $\alpha $ decreases 0.06). The payoffs of the players are (0.9784, 0.3343). The stopping criterion is that, while the thresholds remain within the valid range, the increase in one player's payoff is less than the decrease in the other player's payoff. When the thresholds change from (1,0) to (0.94, 0.06), the accuracy decreases from 1 to 0.9784 but the coverage increases from 0.0795 to 0.3343. We repeat the game by setting $(0.94, 0.06)$ as the next initial thresholds.
The competitive games are repeated seven times. The result is shown in Table TABREF27.
After the eighth iteration, the repetition of games is stopped because further changes to the thresholds would cause them to lie outside the range $0 < \beta < \alpha <1$, and the final result is the equilibrium of the seventh game, $(\alpha , \beta )=(0.52, 0.48)$.
<<</Finding Thresholds with GTRS>>>
<<<Results>>>
We compare Pawlak rough sets, SVM, and our GTRS approach on the proposed dataset. Table TABREF29 shows the results on the experimental data.
The SVM classifier achieved an accuracy of $78\%$ with a $100\%$ coverage. The Pawlak rough set model using $(\alpha , \beta )=(1,0)$ achieves a $100\%$ accuracy and a coverage ratio of $7.95\%$, which means it can only classify $7.95\%$ of the data. The classifier constructed by GTRS with $(\alpha , \beta )=(0.52, 0.48)$ reached an accuracy of $82.71\%$ and a coverage of $97.49\%$, which indicates that $97.49\%$ of the data can be classified with an accuracy of $82.71\%$. The remaining $2.51\%$ of the data cannot be classified without providing more information. To make our method comparable to other baselines such as SVM, we assume random guessing is made on the deferral region and present the modified accuracy. The modified accuracy for our approach is then $0.8271\times 0.9749 + 0.5 \times 0.0251 =81.89\%$. Our method shows a significant improvement compared with the Pawlak model and SVM.
<<</Results>>>
<<</Experiments>>>
<<<Conclusion>>>
In this paper, we propose a satirical news detection approach based on extracted semantic features and game-theoretic rough sets. In our model, the semantic feature extraction captures the inconsistency in the different structural parts of the sentences, and the GTRS classifier can process the incomplete information based on repetitive learning and the acceptance and rejection thresholds. The experimental results on our created satirical and legitimate news tweets dataset show that our model significantly outperforms the Pawlak rough set model and SVM. In particular, we demonstrate our model's ability to interpret satirical news detection from a semantic and information trade-off perspective. An interesting extension of our work would be to use rough set models to extract linguistic features at the document level.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Methodology"
],
"type": "disordered_section"
}
|
1910.10869
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Combining Acoustics, Content and Interaction Features to Find Hot Spots in Meetings
<<<Abstract>>>
Involvement hot spots have been proposed as a useful concept for meeting analysis and studied off and on for over 15 years. These are regions of meetings that are marked by high participant involvement, as judged by human annotators. However, prior work was either not conducted in a formal machine learning setting, or focused on only a subset of possible meeting features or downstream applications (such as summarization). In this paper we investigate to what extent various acoustic, linguistic and pragmatic aspects of the meetings can help detect hot spots, both in isolation and jointly. In this context, the openSMILE toolkit \cite{opensmile} is to used to extract features based on acoustic-prosodic cues, BERT word embeddings \cite{BERT} are used for modeling the lexical content, and a variety of statistics based on the speech activity are used to describe the verbal interaction among participants. In experiments on the annotated ICSI meeting corpus, we find that the lexical modeling part is the most informative, with incremental contributions from interaction and acoustic-prosodic model components.
<<</Abstract>>>
<<<Introduction and Prior Work>>>
A definition of the meeting “hot spots” was first introduced in BIBREF2, where it was investigated whether human annotators could reliably identify regions in which participants are “highly involved in the discussion”. The motivation was that meetings generally have low information density and are tedious to review verbatim after the fact. An automatic system that could detect regions of high interest (as indicated by the involvement of the participants during the meeting) would thus be useful. Relatedly, automatic meeting summarization could also benefit from such information to give extra weight to hot spot regions in selecting or abstracting material for inclusion in the summary. Later work on the relationship between involvement and summarization BIBREF3 defined a different approach: hot spots are those regions chosen for inclusion in a summary by human annotators (“summarization hot spots”). In the present work we stick with the original “involvement hot spot” notion, and refer to such regions simply as “hot spots”, regardless of their possible role in summarization. We note that high involvement may be triggered both by a meeting's content (“what is being talked about”, and “what may be included in a textual summary”) and by behavioral and social factors, such as a desire to participate, to stake out a position, or to oppose another participant. A related notion in dialog system research is “level of interest” BIBREF4.
The initial research on hot spots focused on the reliability of human annotators and correlations with certain low-level acoustic features, such as pitch BIBREF2. Also investigated were the correlation between hot spots and dialog acts BIBREF5 and hot spots and speaker overlap BIBREF6, without however conducting experiments in automatic hot spot prediction using machine learning techniques. Laskowski BIBREF7 redefined the hot spot annotations in terms of time-based windows over meetings, and investigated various classifier models to detect “hotness” (i.e., elevated involvement). However, that work focused on only two types of speech features: presence of laughter and the temporal patterns of speech activity across the various participants, both of which were found to be predictive of involvement.
For the related problem of level-of-interest prediction in dialog systems BIBREF8, it was found that content-based classification can also be effective, using both a discriminative TF-IDF model and lexical affect scores, as well as prosodic features. In line with the earlier hot spot research on interaction patterns and speaker overlap, turn-taking features were shown to be helpful for spotting summarization hot spots, in BIBREF3, and even more so than the human involvement annotations. The latter result confirms our intuition that summarization-worthiness and involvement are different notions of “hotness”.
In this paper, following Laskowski, we focus on the automatic prediction of the speakers' involvement in sliding-time windows/segments. We evaluate machine learning models based on a range of features that can be extracted automatically from audio recordings, either directly via signal processing or via the use of automatic transcriptions (ASR outputs). In particular, we investigate the relative contributions of three classes of information:
low-level acoustic-prosodic features, such as those commonly used in other paralinguistic tasks, such as sentiment analysis (extracted using openSMILE BIBREF0);
spoken word content, as encoded with a state-of-the-art lexical embedding approach such as BERT BIBREF1;
speaker interaction, based on speech activity over time and across different speakers.
We attach lower importance to laughter, even though it was found to be highly predictive of involvement in the ICSI corpus, partly because we believe it would not transfer well to more general types of (e.g., business) meetings, and partly because laughter detection is still a hard problem in itself BIBREF9. Generation of speaker-attributed meeting transcriptions, on the other hand, has seen remarkable progress BIBREF10 and could support the features we focus on here.
<<</Introduction and Prior Work>>>
<<<Data>>>
The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset comprises 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Heightened involvement is rare, being marked on only 1% of utterances.
Due to the severe imbalance in the label distribution, Laskowski BIBREF13 proposed extending the involvement, or hotness, labels to sliding time windows. In our implementation (details below), this resulted in 21.7% of samples (windows) being labeled as “involved”.
We split the corpus into three subsets: training, development, and evaluation, keeping meetings intact. Table TABREF4 gives statistics of these partitions.
We were concerned with the relatively small number of meetings in the test sets, and repeated several of our experiments with a (jackknifing) cross-validation setup over the training set. The results obtained were very similar to those with the fixed train/test split results that we report here.
<<<Time Windowing>>>
As stated above, the corpus was originally labeled for hot spots at the utterance level, where involvement was marked by either a `b' or a `b+' label. Training and test samples for our experiments correspond to 60 s-long sliding windows, with a 15 s step size. If a certain window, e.g., a segment spanning the times 15 s ...75 s, overlaps with any involved speech utterance, then we label that whole window as `hot'. Fig. FIGREF6 gives a visual representation.
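A small sketch of this windowing and labelling scheme (illustrative, not the project code); `hot_spans` stands in for the time spans of utterances annotated as involved.

```python
def window_labels(meeting_end, hot_spans, win=60.0, step=15.0):
    """Label 60 s sliding windows (15 s step) as 'hot' if they overlap any
    involved utterance; hot_spans is a list of (start, end) times in seconds."""
    labels = []
    t = 0.0
    while t + win <= meeting_end:
        hot = any(s < t + win and e > t for s, e in hot_spans)
        labels.append((t, t + win, int(hot)))
        t += step
    return labels

# Toy meeting of 150 s with one involved utterance between 70 s and 80 s.
for start, end, hot in window_labels(150.0, [(70.0, 80.0)]):
    print(f"{start:5.0f}-{end:5.0f}s  hot={hot}")
```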
<<</Time Windowing>>>
<<<Metric>>>
In spite of the windowing approach, the class distribution is still skewed, and an accuracy metric would reflect the particular class distribution in our data set. Therefore, we adopt the unweighted average recall (UAR) metric commonly used in emotion classification research. UAR is a reweighted accuracy where the samples of both classes are weighted equally in aggregate. UAR thus simulates a uniform class distribution. To match the objective, our classifiers are trained on appropriately weighted training data. Note that chance performance for UAR is by definition 50%, making results more comparable across different data sets.
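UAR for the two-class case is simply the macro-averaged recall, so it can be computed with scikit-learn as below; the toy labels only illustrate the skewed class distribution.

```python
from sklearn.metrics import recall_score

y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # skewed class distribution
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]

# UAR = mean of per-class recalls; equivalent to macro-averaged recall.
uar = recall_score(y_true, y_pred, average="macro")
print(uar)   # recall(hot)=0.5, recall(not hot)=0.875 -> UAR = 0.6875
```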
<<</Metric>>>
<<</Data>>>
<<<Feature Description>>>
<<<Acoustic-Prosodic Features>>>
Prosody encompasses pitch, energy, and durational features of speech. Prosody is thought to convey emphasis, sentiment, and emotion, all of which are presumably correlated with expressions of involvement. We used the openSMILE toolkit BIBREF0 to compute 988 features as defined by the emobase988 configuration file, operating on the close-talking meeting recordings. This feature set consists of low-level descriptors such as intensity, loudness, Mel-frequency cepstral coefficients, and pitch. For each low-level descriptor, functionals such as max/min value, mean, standard deviation, kurtosis, and skewness are computed. Finally, global mean and variance normalization are applied to each feature, using training set statistics. The feature vector thus captures acoustic-prosodic features aggregated over what are typically utterances. We tried extracting openSMILE features directly from 60 s windows, but found better results by extracting subwindows of 5 s, followed by pooling over the longer 60 s duration. We attribute this to the fact that emobase features are designed to operate on individual utterances, which have durations closer to 5 s than 60 s.
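A hedged sketch of the post-processing described here: given 988-dimensional emobase vectors for the 5 s subwindows of one 60 s window (extraction with the openSMILE toolkit itself is assumed to have happened externally), each feature is normalised with training-set statistics. The simple mean over subwindows shown at the end is only a placeholder, since the paper's neural model pools inside the network instead.

```python
import numpy as np

def normalise_and_pool(subwindow_feats, train_mean, train_std):
    """Per-feature global mean/variance normalisation (training-set statistics),
    followed by a simple mean over the 5 s subwindows of one 60 s window."""
    z = (subwindow_feats - train_mean) / (train_std + 1e-8)
    return z.mean(axis=0)

rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 988))                 # 12 x 5 s subwindows, emobase988
mu, sd = feats.mean(axis=0), feats.std(axis=0)     # stand-ins for training-set stats
print(normalise_and_pool(feats, mu, sd).shape)     # (988,)
```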
<<</Acoustic-Prosodic Features>>>
<<<Word-Based Features>>>
<<<Bag of words with TF-IDF>>>
Initially, we investigated a simple bag-of-words model including all unigrams, bigrams, and trigrams found in the training set. Occurrences of the top 10,000 n-grams were encoded to form a 10,000-dimensional vector, with values weighted according to TF-IDF. TF-IDF weights n-grams according to both their frequency (TF) and their salience (inverse document frequency, IDF) in the data, where each utterance was treated as a separate document. The resulting feature vectors are very sparse.
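A minimal scikit-learn version of this baseline (a sketch, not the exact configuration used): unigram-to-trigram TF-IDF limited to 10,000 features, with each utterance treated as a document. Note that `max_features` keeps the most frequent n-grams, which is close to, but may not exactly match, the paper's top-10,000 selection.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# 10,000-dimensional unigram/bigram/trigram TF-IDF features over utterances.
vectorizer = TfidfVectorizer(ngram_range=(1, 3), max_features=10_000)

utterances = ["so what do we think about the deadline",
              "i think the deadline is way too tight",
              "let us move the deadline to next friday"]
X = vectorizer.fit_transform(utterances)       # sparse (n_utterances x <=10000)
print(X.shape, X.nnz)
```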
<<</Bag of words with TF-IDF>>>
<<<Embeddings>>>
The ICSI dataset is too small to train a neural embedding model from scratch. Therefore, it is convenient to use the pre-trained BERT embedding architecture BIBREF1 to create an utterance-level embedding vector for each region of interest. Having been trained on a large text corpus, the resulting embeddings encode semantic similarities among utterances, and would enable generalization from word patterns seen in the ICSI training data to those that have not been observed on that limited corpus.
We had previously also created an adapted version of the BERT model, tuned to perform utterance-level sentiment classification, on a separate dataset BIBREF14. As proposed in BIBREF1, we fine-tuned all layers of the pre-trained BERT model by adding a single fully-connected layer and classifying using only the embedding corresponding to the classification ([CLS]) token prepended to each utterance. The difference in UAR between the hot spot classifiers using the pre-trained embeddings and those using the sentiment-adapted embeddings is small. Since the classifier using embeddings extracted by the sentiment-adapted model yielded slightly better performance, we report all results using these as input.
To obtain a single embedding for each 60 s window, we experimented with various approaches of pooling the token and utterance-level embeddings. For our first approach, we ignored the ground-truth utterance segmentation and speaker information. We merged all words spoken within a particular window into a single contiguous span. Following BIBREF1, we added the appropriate classification and separation tokens to the text and selected the embedding corresponding to the [CLS] token as the window-level embedding. Our second approach used the ground-truth segmentation of the dialogue. Each speaker turn was independently modeled, and utterance-level embeddings were extracted using the representation corresponding to the [CLS] token. Utterances that cross window boundaries are truncated using the word timestamps, so only words spoken within the given time window are considered. For all reported experiments, we use L2-norm pooling to form the window-level embeddings for the final classifier, as this performed better than either mean or max pooling.
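A hedged sketch of the embedding pipeline using the Hugging Face `transformers` API: it extracts the [CLS] representation per utterance from a stock pre-trained BERT (the paper uses a sentiment-adapted, 1024-dimensional model instead) and pools utterances within a window with an element-wise L2 norm, which is our reading of “L2-norm pooling”.

```python
import numpy as np
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def utterance_embedding(text):
    """[CLS] embedding of a single utterance (768-dim for bert-base)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)
    return hidden[0, 0].numpy()                      # the [CLS] position

def window_embedding(utterances):
    """Element-wise L2-norm pooling of utterance embeddings over one window."""
    embs = np.stack([utterance_embedding(u) for u in utterances])
    return np.sqrt((embs ** 2).sum(axis=0))

print(window_embedding(["okay so the budget is final",
                        "no way, we never agreed to that"]).shape)
```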
<<</Embeddings>>>
<<</Word-Based Features>>>
<<<Speaker Activity Features>>>
These features were a compilation of three different feature types; a sketch computing all three follows below:
Speaker overlap percentages: Based on the available word-level times, we computed a 6-dimensional feature vector, where the $i$th index indicates the fraction of time that $i$ or more speakers are talking within a given window. This can be expressed by $\frac{t_i}{60}$ with $t_i$ indicating the time in seconds that $i$ or more people were speaking at the same time.
Unique speaker count: Counts the unique speakers within a window, as a useful metric to track the diversity of participation within a certain window.
Turn switch count: Counts the number of times a speaker begins talking within a window. This is a similar metric to the number of utterances. However, unlike utterance count, turn switches can be computed entirely from speech activity, without requiring a linguistic segmentation.
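Below is a combined sketch of all three feature types, computed from speaker-attributed (speaker, start, end) spans for one window; the 10 ms frame resolution and the exact turn-switch rule are implementation assumptions.

```python
import numpy as np

def speaker_activity_features(turns, win_start, win_len=60.0, max_overlap=6):
    """Speech-activity features for one window: fractions of time i or more
    speakers talk simultaneously (i = 1..6), unique-speaker count, and
    turn-switch count; `turns` holds (speaker, start, end) spans in seconds."""
    speakers = sorted({spk for spk, _, _ in turns})
    grid = np.zeros((len(speakers), int(win_len * 100)))     # 10 ms frames
    uniq, switches = set(), 0
    for spk, s0, e0 in turns:
        s, e = max(s0, win_start), min(e0, win_start + win_len)
        if e <= s:
            continue                                          # no overlap with window
        uniq.add(spk)
        if s0 >= win_start:
            switches += 1                                     # speaker starts inside window
        row = speakers.index(spk)
        grid[row, int((s - win_start) * 100):int((e - win_start) * 100)] = 1
    talking = grid.sum(axis=0)                                # active speakers per frame
    overlap = [float((talking >= i).mean()) for i in range(1, max_overlap + 1)]
    return overlap, len(uniq), switches

turns = [("A", 2.0, 10.0), ("B", 8.0, 20.0), ("A", 18.0, 25.0), ("C", 30.0, 40.0)]
print(speaker_activity_features(turns, win_start=0.0))
```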
<<</Speaker Activity Features>>>
<<<Laughter Count>>>
Laskowski found that laughter is highly predictive of involvement in the ICSI data. Laughter is annotated on an utterance level and falls into two categories: laughter solely on its own (no words) or laughter contained within an utterance (i.e. during speech). The feature is a simple tally of the number of times people laughed within a window. We include it in some of our experiments for comparison purposes, though we do not trust it as a general feature. (The participants in the ICSI meetings are far too familiar and at ease with each other to be representative with regard to laughter.)
<<</Laughter Count>>>
<<</Feature Description>>>
<<<Modeling>>>
<<<Non-Neural Models>>>
In preliminary experiments, we compared several non-neural classifiers, including logistic regression (LR), random forests, linear support vector machines, and multinomial naive Bayes. Logistic regression gave the best results all around, and we used it exclusively for the results shown here, unless neural networks are used instead.
<<</Non-Neural Models>>>
<<<Feed-Forward Neural Networks>>>
<<<Pooling Techniques>>>
For BERT and openSMILE vector classification, we designed two different feed-forward neural network architectures. The sentiment-adapted embeddings described in Section SECREF3 produce one 1024-dimensional vector per utterance. Since all classification operates on time windows, we had to pool over all utterances falling within a given window, taking care to truncate words falling outside the window. We tested four pooling methods: L2-norm, mean, max, and min, with L2-norm giving the best results.
As for the prosodic model, each vector extracted from openSMILE represents a 5 s interval. Since there was both a channel/speaker-axis and a time-axis, we needed to pool over both dimensions in order to have a single vector representing the prosodic features of a 60 s window. The second to last layer is the pooling layer, max-pooling across all the channels, and then mean-pooling over time. The output of the pooling layer is directly fed into the classifier.
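The two-axis pooling can be written in a couple of numpy lines; the array shapes (6 channels, 12 subwindows of 5 s, 988 features) are assumptions matching the description above.

```python
import numpy as np

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 12, 988))           # (channels, 5 s subwindows, emobase988)

window_vec = feats.max(axis=0).mean(axis=0)     # max over channels, then mean over time
print(window_vec.shape)                         # (988,) -> fed to the classifier
```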
<<</Pooling Techniques>>>
<<<Hyperparameters>>>
The hyperparameters of the neural networks (hidden layer number and sizes) were also tuned in preliminary experiments. Details are given in Section SECREF5.
<<</Hyperparameters>>>
<<</Feed-Forward Neural Networks>>>
<<<Model Fusion>>>
Fig. FIGREF19 depicts the way features from multiple categories are combined. Speech activity and word features are fed directly into a final LR step. Acoustic-prosodic features are first combined in a feed-forward neural classifier, whose output log posteriors are in turn fed into the LR step for fusion. (When using only prosodic features, the ANN outputs are used directly.)
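A minimal sketch of this late-fusion step with scikit-learn: window-level word embeddings, speech-activity features, and the prosodic ANN's log posteriors are concatenated and fed to a logistic regression. All arrays here are random placeholders, and `class_weight="balanced"` stands in for the class-weighted training mentioned in the metric section.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 200
bert_vec = rng.normal(size=(n, 1024))        # window-level word embeddings
activity = rng.normal(size=(n, 8))           # overlap fractions, #speakers, #switches
ann_logpost = rng.normal(size=(n, 2))        # log posteriors from the prosodic ANN
y = rng.integers(0, 2, size=n)

X = np.hstack([bert_vec, activity, ann_logpost])
fusion = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
print(fusion.predict_proba(X[:3]))
```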
<<</Model Fusion>>>
<<</Modeling>>>
<<<Experiments>>>
We group experiments by the type of features they are based on: acoustic-prosodic, word-based, and speech activity, evaluating each group first by itself, and then in combination with others.
<<<Speech Feature Results>>>
As discussed in Section SECREF3, a multitude of input features were investigated, with some being more discriminative. The most useful speech activity features were speaker overlap percentage, number of unique speakers, and number of turn switches, giving evaluation set UARs of 63.5%, 63.9%, and 66.6%, respectively. When combined the UAR improved to 68.0%, showing that these features are partly complementary.
<<</Speech Feature Results>>>
<<<Word-Based Results>>>
The TF-IDF model alone gave a UAR of 59.8%. A drastic increase in performance to 70.5% was found when using the BERT embeddings instead. Therefore we adopted embeddings for all further experiments based on word information.
Three different types of embeddings were investigated, i.e. sentiment-adapted embeddings at an utterance-level, unadapted embeddings at the utterance-level, and unadapted embeddings over time windows.
The adapted embeddings (on utterances) performed best, indicating that adaptation to sentiment task is useful for involvement classification. It is important to note, however, that the utterance-level embeddings are larger than the window-level embeddings. This is due to there being more utterances than windows in the meeting corpus.
The best neural architecture we found for these embeddings is a 5-layer neural network with sizes 1024-64-32-12-2. Other hyperparameters for this model are dropout rate = 0.4, learning rate = $10^{-7}$ and activation function “tanh”. The UAR on the evaluation set with just BERT embeddings as input is 65.2%.
Interestingly, the neural model was outperformed by a LR directly on the embedding vectors. Perhaps the neural network requires further fine-tuning, or the neural model is too prone to overfitting, given the small training corpus. In any case, we use LR on embeddings for all subsequent results.
<<</Word-Based Results>>>
<<<Acoustic-Prosodic Feature Results>>>
Our prosodic model is a 5-layer ANN, as described in Section SECREF15. The architecture is: 988-512-128-16-Pool-2. The hyperparameters are: dropout rate = 0.4, learning rate = $10^{-7}$, activation = “tanh". The UAR on the evaluation set with just openSMILE features is 62.0%.
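A PyTorch sketch of a network with this 988-512-128-16-Pool-2 shape (an illustrative reconstruction, not the authors' implementation): per-subwindow vectors are embedded, max-pooled over channels, mean-pooled over time, and classified.

```python
import torch
import torch.nn as nn

class ProsodicNet(nn.Module):
    """988-512-128-16-Pool-2: embed each (channel, subwindow) feature vector,
    max-pool over channels, mean-pool over time, then classify."""
    def __init__(self, p_drop=0.4):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(988, 512), nn.Tanh(), nn.Dropout(p_drop),
            nn.Linear(512, 128), nn.Tanh(), nn.Dropout(p_drop),
            nn.Linear(128, 16), nn.Tanh())
        self.out = nn.Linear(16, 2)

    def forward(self, x):                        # x: (batch, channels, time, 988)
        h = self.embed(x)                        # (batch, channels, time, 16)
        h = h.max(dim=1).values.mean(dim=1)      # pool channels, then time -> (batch, 16)
        return self.out(h)

model = ProsodicNet()
x = torch.randn(4, 6, 12, 988)                   # 4 windows, 6 channels, 12 x 5 s
print(model(x).shape)                            # torch.Size([4, 2])
```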
<<</Acoustic-Prosodic Feature Results>>>
<<<Fusion Results and Discussion>>>
Table TABREF24 gives the UAR for each feature subset individually, for all features combined, and for a combination in which one feature subset in turn is left out. The one-feature-set-at-time results suggest that prosody, speech activity and words are of increasing importance in that order. The leave-one-out analysis agrees that the words are the most important (largest drop in accuracy when removed), but on that criterion the prosodic features are more important than speech-activity. The combination of all features is 0.4% absolute better than any other subset, showing that all feature subsets are partly complementary.
Fig. FIGREF25 shows the same results in histogram form, but also adds those with laughter information. Laughter count by itself is the strongest cue to involvement, as Laskowski BIBREF7 had found. However, even given the strong individual laughter feature, the other features add information, pushing the UAR from 75.1% to 77.5%.
<<</Fusion Results and Discussion>>>
<<</Experiments>>>
<<<Conclusion>>>
We studied detection of areas of high involvement, or “hot spots”, within meetings using the ICSI corpus. The features that yielded the best results are in line with our intuitions. Word embeddings, speech activity features such as the number of turn changes, and prosodic features are all plausible indicators of high involvement. Furthermore, the feature sets are partly complementary and yield best results when combined using a simple logistic regression model. The combined model achieves 72.6% UAR, or 77.5% with the laughter feature.
For future work, we would want to see a validation on an independent meeting collection, such as business meetings. Some features, in particular laughter, are bound not to be as useful in this case. More data could also enable the training of joint models that perform an early fusion of the different feature types. Also, the present study still relied on human transcripts, and it would be important to know how much UAR suffers with a realistic amount of speech recognition error. Transcription errors are expected to boost the importance of the feature types that do not rely on words.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Experiments, Abstract"
],
"type": "disordered_section"
}
|
1910.10869
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Combining Acoustics, Content and Interaction Features to Find Hot Spots in Meetings
<<<Abstract>>>
Involvement hot spots have been proposed as a useful concept for meeting analysis and studied off and on for over 15 years. These are regions of meetings that are marked by high participant involvement, as judged by human annotators. However, prior work was either not conducted in a formal machine learning setting, or focused on only a subset of possible meeting features or downstream applications (such as summarization). In this paper we investigate to what extent various acoustic, linguistic and pragmatic aspects of the meetings can help detect hot spots, both in isolation and jointly. In this context, the openSMILE toolkit \cite{opensmile} is to used to extract features based on acoustic-prosodic cues, BERT word embeddings \cite{BERT} are used for modeling the lexical content, and a variety of statistics based on the speech activity are used to describe the verbal interaction among participants. In experiments on the annotated ICSI meeting corpus, we find that the lexical modeling part is the most informative, with incremental contributions from interaction and acoustic-prosodic model components.
<<</Abstract>>>
<<<Introduction and Prior Work>>>
A definition of the meeting “hot spots” was first introduced in BIBREF2, where it was investigated whether human annotators could reliably identify regions in which participants are “highly involved in the discussion”. The motivation was that meetings generally have low information density and are tedious to review verbatim after the fact. An automatic system that could detect regions of high interest (as indicated by the involvement of the participants during the meeting) would thus be useful. Relatedly, automatic meeting summarization could also benefit from such information to give extra weight to hot spot regions in selecting or abstracting material for inclusion in the summary. Later work on the relationship between involvement and summarization BIBREF3 defined a different approach: hot spots are those regions chosen for inclusion in a summary by human annotators (“summarization hot spots”). In the present work we stick with the original “involvement hot spot” notion, and refer to such regions simply as “hot spots”, regardless of their possible role in summarization. We note that high involvement may be triggered both by a meeting's content (“what is being talked about”, and “what may be included in a textual summary”) and by behavioral and social factors, such as a desire to participate, to stake out a position, or to oppose another participant. A related notion in dialog system research is “level of interest” BIBREF4.
The initial research on hot spots focused on the reliability of human annotators and correlations with certain low-level acoustic features, such as pitch BIBREF2. Also investigated were the correlation between hot spots and dialog acts BIBREF5 and hot spots and speaker overlap BIBREF6, without however conducting experiments in automatic hot spot prediction using machine learning techniques. Laskowski BIBREF7 redefined the hot spot annotations in terms of time-based windows over meetings, and investigated various classifier models to detect “hotness” (i.e., elevated involvement). However, that work focused on only two types of speech features: presence of laughter and the temporal patterns of speech activity across the various participants, both of which were found to be predictive of involvement.
For the related problem of level-of-interest prediction in dialog systems BIBREF8, it was found that content-based classification can also be effective, using both a discriminative TF-IDF model and lexical affect scores, as well as prosodic features. In line with the earlier hot spot research on interaction patterns and speaker overlap, turn-taking features were shown to be helpful for spotting summarization hot spots, in BIBREF3, and even more so than the human involvement annotations. The latter result confirms our intuition that summarization-worthiness and involvement are different notions of “hotness”.
In this paper, following Laskowski, we focus on the automatic prediction of the speakers' involvement in sliding-time windows/segments. We evaluate machine learning models based on a range of features that can be extracted automatically from audio recordings, either directly via signal processing or via the use of automatic transcriptions (ASR outputs). In particular, we investigate the relative contributions of three classes of information:
low-level acoustic-prosodic features, such as those commonly used in other paralinguistic tasks, such as sentiment analysis (extracted using openSMILE BIBREF0);
spoken word content, as encoded with a state-of-the-art lexical embedding approach such as BERT BIBREF1;
speaker interaction, based on speech activity over time and across different speakers.
We attach lower importance to laughter, even though it was found to be highly predictive of involvement in the ICSI corpus, partly because we believe it would not transfer well to more general types of (e.g., business) meetings, and partly because laughter detection is still a hard problem in itself BIBREF9. Generation of speaker-attributed meeting transcriptions, on the other hand, has seen remarkable progress BIBREF10 and could support the features we focus on here.
<<</Introduction and Prior Work>>>
<<<Data>>>
The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset comprises 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Heightened involvement is rare, being marked on only 1% of utterances.
Due to the severe imbalance in the label distribution, Laskowski BIBREF13 proposed extending the involvement, or hotness, labels to sliding time windows. In our implementation (details below), this resulted in 21.7% of samples (windows) being labeled as “involved”.
We split the corpus into three subsets: training, development, and evaluation, keeping meetings intact. Table TABREF4 gives statistics of these partitions.
We were concerned with the relatively small number of meetings in the test sets, and repeated several of our experiments with a (jackknifing) cross-validation setup over the training set. The results obtained were very similar to those with the fixed train/test split results that we report here.
<<<Time Windowing>>>
As stated above, the corpus was originally labeled for hot spots at the utterance level, where involvement was marked by either a `b' or a `b+' label. Training and test samples for our experiments correspond to 60 s-long sliding windows, with a 15 s step size. If a certain window, e.g., a segment spanning the times 15 s ...75 s, overlaps with any involved speech utterance, then we label that whole window as `hot'. Fig. FIGREF6 gives a visual representation.
<<</Time Windowing>>>
<<<Metric>>>
In spite of the windowing approach, the class distribution is still skewed, and an accuracy metric would reflect the particular class distribution in our data set. Therefore, we adopt the unweighted average recall (UAR) metric commonly used in emotion classification research. UAR is a reweighted accuracy where the samples of both classes are weighted equally in aggregate. UAR thus simulates a uniform class distribution. To match the objective, our classifiers are trained on appropriately weighted training data. Note that chance performance for UAR is by definition 50%, making results more comparable across different data sets.
<<</Metric>>>
<<</Data>>>
<<<Feature Description>>>
<<<Acoustic-Prosodic Features>>>
Prosody encompasses pitch, energy, and durational features of speech. Prosody is thought to convey emphasis, sentiment, and emotion, all of which are presumably correlated with expressions of involvement. We used the openSMILE toolkit BIBREF0 to compute 988 features as defined by the emobase988 configuration file, operating on the close-talking meeting recordings. This feature set consists of low-level descriptors such as intensity, loudness, Mel-frequency cepstral coefficients, and pitch. For each low-level descriptor, functionals such as max/min value, mean, standard deviation, kurtosis, and skewness are computed. Finally, global mean and variance normalization are applied to each feature, using training set statistics. The feature vector thus captures acoustic-prosodic features aggregated over what are typically utterances. We tried extracting openSMILE features directly from 60 s windows, but found better results by extracting subwindows of 5 s, followed by pooling over the longer 60 s duration. We attribute this to the fact that emobase features are designed to operate on individual utterances, which have durations closer to 5 s than 60 s.
<<</Acoustic-Prosodic Features>>>
<<<Word-Based Features>>>
<<<Bag of words with TF-IDF>>>
Initially, we investigated a simple bag-of-words model including all unigrams, bigrams, and trigrams found in the training set. Occurrences of the top 10,000 n-grams were encoded to form a 10,000-dimensional vector, with values weighted according to TF-IDF. TF-IDF weights n-grams according to both their frequency (TF) and their salience (inverse document frequency, IDF) in the data, where each utterance was treated as a separate document. The resulting feature vectors are very sparse.
<<</Bag of words with TF-IDF>>>
<<<Embeddings>>>
The ICSI dataset is too small to train a neural embedding model from scratch. Therefore, it is convenient to use the pre-trained BERT embedding architecture BIBREF1 to create an utterance-level embedding vector for each region of interest. Having been trained on a large text corpus, the resulting embeddings encode semantic similarities among utterances, and would enable generalization from word patterns seen in the ICSI training data to those that have not been observed on that limited corpus.
We had previously also created an adapted version of the BERT model, tuned to perform utterance-level sentiment classification, on a separate dataset BIBREF14. As proposed in BIBREF1, we fine-tuned all layers of the pre-trained BERT model by adding a single fully-connected layer and classifying using only the embedding corresponding to the classification ([CLS]) token prepended to each utterance. The difference in UAR between the hot spot classifiers using the pre-trained embeddings and those using the sentiment-adapted embeddings is small. Since the classifier using embeddings extracted by the sentiment-adapted model yielded slightly better performance, we report all results using these as input.
To obtain a single embedding for each 60 s window, we experimented with various approaches of pooling the token and utterance-level embeddings. For our first approach, we ignored the ground-truth utterance segmentation and speaker information. We merged all words spoken within a particular window into a single contiguous span. Following BIBREF1, we added the appropriate classification and separation tokens to the text and selected the embedding corresponding to the [CLS] token as the window-level embedding. Our second approach used the ground-truth segmentation of the dialogue. Each speaker turn was independently modeled, and utterance-level embeddings were extracted using the representation corresponding to the [CLS] token. Utterances that cross window boundaries are truncated using the word timestamps, so only words spoken within the given time window are considered. For all reported experiments, we use L2-norm pooling to form the window-level embeddings for the final classifier, as this performed better than either mean or max pooling.
<<</Embeddings>>>
<<</Word-Based Features>>>
<<<Speaker Activity Features>>>
These features were a compilation of three different feature types:
Speaker overlap percentages: Based on the available word-level times, we computed a 6-dimensional feature vector, where the $i$th index indicates the fraction of time that $i$ or more speakers are talking within a given window. This can be expressed by $\frac{t_i}{60}$ with $t_i$ indicating the time in seconds that $i$ or more people were speaking at the same time.
Unique speaker count: Counts the unique speakers within a window, as a useful metric to track the diversity of participation within a certain window.
Turn switch count: Counts the number of times a speaker begins talking within a window. This is a similar metric to the number of utterances. However, unlike utterance count, turn switches can be computed entirely from speech activity, without requiring a linguistic segmentation.
<<</Speaker Activity Features>>>
<<<Laughter Count>>>
Laskowski found that laughter is highly predictive of involvement in the ICSI data. Laughter is annotated on an utterance level and falls into two categories: laughter solely on its own (no words) or laughter contained within an utterance (i.e. during speech). The feature is a simple tally of the number of times people laughed within a window. We include it in some of our experiments for comparison purposes, though we do not trust it as a general feature. (The participants in the ICSI meetings are far too familiar and at ease with each other to be representative with regard to laughter.)
<<</Laughter Count>>>
<<</Feature Description>>>
<<<Modeling>>>
<<<Non-Neural Models>>>
In preliminary experiments, we compared several non-neural classifiers, including logistic regression (LR), random forests, linear support vector machines, and multinomial naive Bayes. Logistic regression gave the best results overall, and we used it exclusively for the results reported here, except where neural networks are used instead.
<<</Non-Neural Models>>>
<<<Feed-Forward Neural Networks>>>
<<<Pooling Techniques>>>
For BERT and openSMILE vector classification, we designed two different feed-forward neural network architectures. The sentiment-adapted embeddings described in Section SECREF3 produce one 1024-dimensional vector per utterance. Since all classification operates on time windows, we had to pool over all utterances falling within a given window, taking care to truncate words falling outside the window. We tested four pooling methods: L2-norm, mean, max, and min, with L2-norm giving the best results.
As for the prosodic model, each vector extracted from openSMILE represents a 5 s interval. Since there were both a channel/speaker axis and a time axis, we needed to pool over both dimensions in order to obtain a single vector representing the prosodic features of a 60 s window. The second-to-last layer is the pooling layer, which max-pools across all channels and then mean-pools over time. The output of the pooling layer is fed directly into the classifier.
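A minimal NumPy sketch of the two pooling operations described in this subsection; the array shapes, function names, and the interpretation of L2-norm pooling as an element-wise L2 norm across utterances are assumptions made for illustration.

import numpy as np

def pool_bert_window(utterance_embeddings):
    # Per-utterance [CLS] vectors, shape (num_utterances, 1024), pooled by
    # taking the element-wise L2 norm across the utterance axis.
    stacked = np.stack(utterance_embeddings)
    return np.sqrt((stacked ** 2).sum(axis=0))

def pool_prosodic_window(window_features):
    # window_features: shape (num_channels, num_5s_intervals, feat_dim), one
    # openSMILE vector per speaker channel and 5 s interval in the 60 s window.
    return window_features.max(axis=0).mean(axis=0)  # max over channels, mean over time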
<<</Pooling Techniques>>>
<<<Hyperparameters>>>
The hyperparameters of the neural networks (hidden layer number and sizes) were also tuned in preliminary experiments. Details are given in Section SECREF5.
<<</Hyperparameters>>>
<<</Feed-Forward Neural Networks>>>
<<<Model Fusion>>>
Fig. FIGREF19 depicts the way features from multiple categories are combined. Speech activity and word features are fed directly into a final LR step. Acoustic-prosodic features are first combined in a feed-forward neural classifier, whose output log posteriors are in turn fed into the LR step for fusion. (When using only prosodic features, the ANN outputs are used directly.)
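A minimal scikit-learn sketch of this late-fusion step; the feature block names, shapes, and the max_iter setting are illustrative assumptions, not details from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_fusion_classifier(speech_activity, word_features, prosodic_log_posteriors, labels):
    # One row per 60 s window in each block; the prosodic block holds the
    # log posteriors produced by the acoustic-prosodic neural network.
    features = np.hstack([speech_activity, word_features, prosodic_log_posteriors])
    return LogisticRegression(max_iter=1000).fit(features, labels)

When only prosodic features are used, the ANN outputs would be used directly instead of being passed through this fusion step, as noted above.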
<<</Model Fusion>>>
<<</Modeling>>>
<<<Experiments>>>
We group experiments by the type of features they are based on: acoustic-prosodic, word-based, and speech activity, evaluating each group first by itself, and then in combination with the others.
<<<Speech Feature Results>>>
As discussed in Section SECREF3, a multitude of input features were investigated, with some being more discriminative. The most useful speech activity features were speaker overlap percentage, number of unique speakers, and number of turn switches, giving evaluation set UARs of 63.5%, 63.9%, and 66.6%, respectively. When combined the UAR improved to 68.0%, showing that these features are partly complementary.
<<</Speech Feature Results>>>
<<<Word-Based Results>>>
The TF-IDF model alone gave a UAR of 59.8%. A drastic increase in performance to 70.5% was found when using the BERT embeddings instead. Therefore, we adopted BERT embeddings for all further experiments based on word information.
Three different types of embeddings were investigated, i.e. sentiment-adapted embeddings at the utterance level, unadapted embeddings at the utterance level, and unadapted embeddings over time windows.
The adapted embeddings (on utterances) performed best, indicating that adaptation to the sentiment task is useful for involvement classification. It is important to note, however, that there are more utterance-level embeddings than window-level embeddings, simply because the meeting corpus contains more utterances than windows.
The best neural architecture we found for these embeddings is a 5-layer neural network with sizes 1024-64-32-12-2. Other hyperparameters for this model are dropout rate = 0.4, learning rate = $10^{-7}$ and activation function “tanh”. The UAR on the evaluation set with just BERT embeddings as input is 65.2%.
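A minimal PyTorch sketch of a network with these layer sizes, activation, and dropout; the class name is illustrative, and the optimizer and training loop (in which the stated learning rate of $10^{-7}$ would be applied) are not specified here.

import torch.nn as nn

class EmbeddingClassifier(nn.Module):
    # 1024-64-32-12-2 feed-forward classifier with tanh activations and dropout 0.4.
    def __init__(self, dropout=0.4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1024, 64), nn.Tanh(), nn.Dropout(dropout),
            nn.Linear(64, 32), nn.Tanh(), nn.Dropout(dropout),
            nn.Linear(32, 12), nn.Tanh(), nn.Dropout(dropout),
            nn.Linear(12, 2),
        )

    def forward(self, x):
        return self.net(x)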
Interestingly, the neural model was outperformed by an LR applied directly to the embedding vectors. Perhaps the neural network requires further fine-tuning, or the neural model is too prone to overfitting, given the small training corpus. In any case, we use LR on embeddings for all subsequent results.
<<</Word-Based Results>>>
<<<Acoustic-Prosodic Feature Results>>>
Our prosodic model is a 5-layer ANN, as described in Section SECREF15. The architecture is: 988-512-128-16-Pool-2. The hyperparameters are: dropout rate = 0.4, learning rate = $10^{-7}$, activation = “tanh”. The UAR on the evaluation set with just openSMILE features is 62.0%.
<<</Acoustic-Prosodic Feature Results>>>
<<<Fusion Results and Discussion>>>
Table TABREF24 gives the UAR for each feature subset individually, for all features combined, and for a combination in which one feature subset in turn is left out. The one-feature-set-at-a-time results suggest that prosody, speech activity and words are of increasing importance in that order. The leave-one-out analysis agrees that the words are the most important (largest drop in accuracy when removed), but on that criterion the prosodic features are more important than the speech activity features. The combination of all features is 0.4% absolute better than any other subset, showing that all feature subsets are partly complementary.
Fig. FIGREF25 shows the same results in histogram form, but also adds those obtained with laughter information. Laughter count by itself is the strongest cue to involvement, as Laskowski BIBREF7 had found. However, even given the strong individual laughter feature, the other features add information, pushing the UAR from 75.1% to 77.5%.
<<</Fusion Results and Discussion>>>
<<</Experiments>>>
<<<Conclusion>>>
We studied detection of areas of high involvement, or “hot spots”, within meetings using the ICSI corpus. The features that yielded the best results are in line with our intuitions. Word embeddings, speech activity features such as the number of turn changes, and prosodic features are all plausible indicators of high involvement. Furthermore, the feature sets are partly complementary and yield the best results when combined using a simple logistic regression model. The combined model achieves 72.6% UAR, or 77.5% with the laughter feature.
For future work, we would like to see a validation on an independent meeting collection, such as business meetings. Some features, in particular laughter, are bound not to be as useful in this case. More data could also enable the training of joint models that perform an early fusion of the different feature types. Also, the present study still relied on human transcripts, and it would be important to know how much UAR suffers with a realistic amount of speech recognition error. Transcription errors are expected to boost the importance of the feature types that do not rely on words.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Introduction and Prior Work, Feature Description"
],
"type": "disordered_section"
}
|
1909.08103
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Simultaneous Speech Recognition and Speaker Diarization for Monaural Dialogue Recordings with Target-Speaker Acoustic Models
<<<Abstract>>>
This paper investigates the use of target-speaker automatic speech recognition (TS-ASR) for simultaneous speech recognition and speaker diarization of single-channel dialogue recordings. TS-ASR is a technique to automatically extract and recognize only the speech of a target speaker given a short sample utterance of that speaker. One obvious drawback of TS-ASR is that it cannot be used when the speakers in the recordings are unknown because it requires a sample of the target speakers in advance of decoding. To remove this limitation, we propose an iterative method, in which (i) the estimation of speaker embeddings and (ii) TS-ASR based on the estimated speaker embeddings are alternately executed. We evaluated the proposed method by using very challenging dialogue recordings in which the speaker overlap ratio was over 20%. We confirmed that the proposed method significantly reduced both the word error rate (WER) and diarization error rate (DER). Our proposed method combined with i-vector speaker embeddings ultimately achieved a WER that differed by only 2.1 % from that of TS-ASR given oracle speaker embeddings. Furthermore, our method can solve speaker diarization simultaneously as a by-product and achieved better DER than that of the conventional clustering-based speaker diarization method based on i-vector.
<<</Abstract>>>
<<<Introduction>>>
Our main goal is to develop a monaural conversation transcription system that can not only perform automatic speech recognition (ASR) of multiple talkers but also determine who spoke the utterance when, known as speaker diarization BIBREF0, BIBREF1. For both ASR and speaker diarization, the main difficulty comes from speaker overlaps. For example, a speaker-overlap ratio of about 15% was reported in real meeting recordings BIBREF2. For such overlapped speech, neither conventional ASR nor speaker diarization provides a result with sufficient accuracy. It is known that mixing the speech of two speakers significantly degrades ASR accuracy BIBREF3, BIBREF4, BIBREF5. In addition, most conventional speaker diarization techniques, such as clustering of speech partitions (e.g. BIBREF0, BIBREF6, BIBREF7, BIBREF8, BIBREF9), assume that there are no speaker overlaps and therefore work properly only when that assumption holds. Due to these difficulties, it is still very challenging to perform ASR and speaker diarization for monaural recordings of conversation.
One solution to the speaker-overlap problem is applying a speech-separation method such as deep clustering BIBREF10 or deep attractor network BIBREF11. However, a major drawback of such a method is that the training criteria for speech separation do not necessarily maximize the accuracy of the final target tasks. For example, if the goal is ASR, it will be better to use training criteria that directly maximize ASR accuracy.
In one line of research using ASR-based training criteria, multi-speaker ASR based on permutation invariant training (PIT) has been proposed BIBREF3, BIBREF12, BIBREF13, BIBREF14, BIBREF15. With PIT, the label-permutation problem is solved by considering all possible permutations when calculating the loss function BIBREF16. PIT was first proposed for speech separation BIBREF16 and soon extended to ASR loss with promising results BIBREF3, BIBREF12, BIBREF13, BIBREF14, BIBREF15. However, a PIT-ASR model produces transcriptions for each utterance of speakers in an unordered manner, and it is no longer straightforward to solve speaker permutations across utterances. To make things worse, a PIT model trained with ASR-based loss normally does not produce separated speech waveforms, which makes speaker tracing more difficult.
In another line of research, target-speaker (TS) ASR, which automatically extracts and transcribes only the target speaker's utterances given a short sample of that speaker's speech, has been proposed BIBREF17, BIBREF4. Žmolíková et al. proposed a target-speaker neural beamformer that extracts a target speaker's utterances given a short sample of that speaker's speech BIBREF17. This model was recently extended to handle ASR-based loss to maximize ASR accuracy with promising results BIBREF4. TS-ASR can naturally solve the speaker-permutation problem across utterances. Importantly, if we can execute TS-ASR for each speaker correctly, speaker diarization is solved at the same time just by extracting the start and end time information of the TS-ASR result. However, one obvious drawback of TS-ASR is that it cannot be applied when the speakers in the recordings are unknown because it requires a sample of the target speakers in advance of decoding.
Based on this background, we propose a speech recognition and speaker diarization method that is based on TS-ASR but can be applied without knowing the speaker information in advance. To remove the limitation of TS-ASR, we propose an iterative method, in which (i) the estimation of target-speaker embeddings and (ii) TS-ASR based on the estimated embeddings are alternately executed. As an initial trial, we evaluated the proposed method by using real dialogue recordings in the Corpus of Spontaneous Japanese (CSJ). Although each recording contains the speech of only two speakers, the speaker-overlap ratio of the dialogue speech is very high, at 20.1%. Thus, this is very challenging even for state-of-the-art ASR and speaker diarization. We show that the proposed method effectively reduced both the word error rate (WER) and the diarization error rate (DER).
<<</Introduction>>>
<<<Simultaneous ASR and Speaker Diarization>>>
In this section, we first explain the problem we target and then describe the proposed method with reference to Figure FIGREF1.
<<<Problem statement>>>
The overview of the problem is shown in Figure FIGREF1 (left). We assume a sequence of observations $\mathcal {X}=\lbrace {\bf X}_1,...,{\bf X}_U\rbrace $, where $U$ is the number of observations, and ${\bf X}_u$ is the $u$-th observation consisting of a sequence of acoustic features. Such a sequence arises naturally when we split a long recording into small segments based on voice activity detection, a standard preprocessing step for ASR that avoids generating overly large lattices. We also assume a tuple of word hypotheses ${\bf W}_u=(W_{1,u},...,W_{J,u})$ for an observation ${\bf X}_u$ where $J$ is the number of speakers, and $W_{j,u}$ represents the speech-recognition hypothesis of the $j$-th speaker given observation ${\bf X}_u$. We assume $W_{j,u}$ contains not only word sequences but also their corresponding frame-level time alignments of phonemes and silences. Finally, we assume a tuple of speaker embeddings $\mathcal {E}=(e_1, ..., e_J)$, where $e_j\in \mathbb {R}^d$ represents the $d$-dim speaker embedding of the $j$-th speaker.
Then, our objective is to find the best possible $\mathcal {W}=\lbrace {\bf W}_1,...,{\bf W}_U\rbrace $ given a sequence of observations $\mathcal {X}$ as follows.
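(The equation itself is not reproduced in this extract; based on the description that follows, it plausibly reads $\hat{\mathcal {W}} = \mathrm {arg\,max}_{\mathcal {W}}\, P(\mathcal {W}|\mathcal {X}) = \mathrm {arg\,max}_{\mathcal {W}} \sum _{\mathcal {E}} P(\mathcal {W},\mathcal {E}|\mathcal {X}) \approx \mathrm {arg\,max}_{\mathcal {W}} \max _{\mathcal {E}} P(\mathcal {W},\mathcal {E}|\mathcal {X})$. This reconstruction is an assumption and not taken from the original paper.)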
Here, the starting point is the conventional maximum a posteriori-based decoding given $\mathcal {X}$ but for multiple speakers. We then introduce the speaker embeddings $\mathcal {E}$ as a hidden variable (Eq. ). Finally, we approximate the summation by using a max operation (Eq. ).
Our motivation to introduce $\mathcal {E}$, which is constant across all observation indices $u$, is to explicitly enforce the order of speakers in $\mathcal {W}$ to be constant over indices $u$. It should be emphasized that if we can solve the problem, speaker diarization is solved at the same time just by extracting the start and end time information of each hypothesis in $\mathcal {W}$. Also note that there are $J!$ possible solutions by swapping the order of speakers in $\mathcal {E}$, and it is sufficient to find just one such solution.
<<</Problem statement>>>
<<<Iterative maximization>>>
It is not easy to directly solve $P(\mathcal {W},\mathcal {E}|\mathcal {X})$, so we propose to alternately maximize $\mathcal {W}$ and $\mathcal {E}$. Namely, we first fix $\underline{\mathcal {W}}$ and find $\mathcal {E}$ that maximizes $P(\underline{\mathcal {W}},\mathcal {E}|\mathcal {X})$. We then fix $\underline{\mathcal {E}}$ and find $\mathcal {W}$ that maximizes $P(\mathcal {W},\underline{\mathcal {E}}|\mathcal {X})$. By iterating this procedure, $P(\mathcal {W},\mathcal {E}|\mathcal {X})$ can be increased monotonically. Note that it can be said by a simple application of the chain rule that finding $\mathcal {E}$ that maximizes $P(\underline{\mathcal {W}},\mathcal {E}|\mathcal {X})$ with a fixed $\underline{\mathcal {W}}$ is equivalent to finding $\mathcal {E}$ that maximizes $P(\mathcal {E}|\underline{\mathcal {W}},\mathcal {X})$. The same thing can be said for the estimation of $\mathcal {W}$ with a fixed $\underline{\mathcal {E}}$.
For the $(i)$-th iteration of the maximization ($i\in \mathbb {Z}^{\ge 0}$), we first find the most plausible estimation of $\mathcal {E}$ given the $(i-1)$-th speech-recognition hypothesis $\tilde{\mathcal {W}}^{(i-1)}$ as follows.
Here, the estimation of $\tilde{\mathcal {E}}^{(i)}$ is dependent on $\tilde{\mathcal {W}}^{(i-1)}$ for $i \ge 1$. Assuming that the overlapped speech corresponds to a “third person” who is different from any person in the recording, Eq. DISPLAY_FORM5 can be achieved by estimating the speaker embeddings only from non-overlapped regions (upper part of Figure FIGREF1 (right)). In this study, we used i-vector BIBREF18 as the representation of speaker embeddings, and estimated i-vector based only on the non-overlapped region given $\tilde{\mathcal {W}}^{(i-1)}$ for each speaker. Note that, since we do not have an estimation of $\mathcal {W}$ for the first iteration, $\tilde{\mathcal {E}}^{(0)}$ is initialized only by $\mathcal {X}$. In this study, we estimated the i-vector for each speaker given the speech region that was estimated by the clustering-based speaker diarization method. More precisely, we estimated the i-vector for each ${\bf X}_u$ and then applied $J$-cluster K-means clustering. The center of each cluster was used for the initial speaker embeddings $\tilde{\mathcal {E}}^{(0)}$.
We then update $\mathcal {W}$ given speaker embeddings $\tilde{\mathcal {E}}^{(i)}$.
Here, we estimate the most plausible hypotheses $\mathcal {W}$ given estimated embeddings $\tilde{\mathcal {E}}^{(i)}$ and observation $\mathcal {X}$ (Eq. DISPLAY_FORM8). We then assume the conditional independence of ${\bf W}_u$ given ${\bf X}_u$ for each segment $u$ (Eq. ). Finally, we further assume the conditional independence of $W_{j,u}$ given $\tilde{e}_j^{(i)}$ for each speaker $j$ (Eq. ). The final equation can be solved by applying TS-ASR for each segment $u$ for each speaker $j$ (lower part of Figure FIGREF1 (right)). We will review the detail of TS-ASR in the next section.
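A minimal Python-style sketch of the alternating procedure, with the embedding estimation and TS-ASR steps left as abstract callables; all names are illustrative, and in the paper the embeddings are i-vectors and the recognizer is the TS-AM reviewed in the next section.

def iterative_asr_diarization(segments, num_speakers, num_iters,
                              init_embeddings, estimate_embeddings, ts_asr):
    # segments: the observation sequence X_1..X_U (e.g. VAD-based chunks).
    # E^(0): initialised from the audio alone, e.g. K-means over per-segment i-vectors.
    embeddings = init_embeddings(segments, num_speakers)
    hypotheses = None
    for _ in range(num_iters + 1):
        if hypotheses is not None:
            # (i) Re-estimate each speaker's embedding from the non-overlapped
            #     regions implied by the previous hypotheses W^(i-1).
            embeddings = estimate_embeddings(segments, hypotheses)
        # (ii) Run TS-ASR for every segment and every target-speaker embedding,
        #      giving W_{j,u} for all speakers j and segments u.
        hypotheses = [[ts_asr(segment, emb) for emb in embeddings] for segment in segments]
    return hypotheses, embeddings

The start and end times contained in the final hypotheses then yield the diarization output as a by-product, as noted above.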
<<</Iterative maximization>>>
<<</Simultaneous ASR and Speaker Diarization>>>
<<<TS-ASR: Review>>>
<<<Overview of TS-ASR>>>
TS-ASR is a technique to extract and recognize only the speech of a target speaker given a short sample utterance of that speaker BIBREF17, BIBREF21, BIBREF4. Originally, the sample utterance was fed into a special neural network that outputs an averaged embedding to control the weighting of speaker-dependent blocks of the acoustic model (AM). However, to make the problem simpler, we assume that a $d$-dimensional speaker embedding $e_{\rm tgt}\in \mathbb {R}^d$ is extracted from the sample utterance. In this context, TS-ASR can be expressed as the problem to find the best hypothesis $W_{\rm tgt}$ given observation ${\bf X}$ and speaker embedding $e_{\rm tgt}$ as follows.
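(The equation is not shown in this extract; it presumably takes the form $\hat{W}_{\rm tgt} = \mathrm {arg\,max}_{W}\, P(W|{\bf X}, e_{\rm tgt})$, which is a reconstruction from the surrounding description rather than the paper's own notation.)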
If we have a well-trained TS-ASR, Eq. can be solved by simply applying the TS-ASR for each segment $u$ for each speaker $j$.
<<</Overview of TS-ASR>>>
<<<TS-AM with auxiliary output network>>>
<<<Overview>>>
Although any speech recognition architecture can be used for TS-ASR, we adopted a variant of the TS-AM that was recently proposed and has promising accuracy BIBREF5. Figure FIGREF13 describes the TS-AM that we applied for this study. This model has two input branches. One branch accepts acoustic features ${\bf X}$ as a normal AM while the other branch accepts an embedding $e_{\rm tgt}$ that represents the characteristics of the target speaker. In this study, we used a log Mel-filterbank (FBANK) and i-vector BIBREF18, BIBREF22 for the acoustic features and target-speaker embedding, respectively.
A unique component of the model is in its output branch. The model has multiple output branches that produce outputs ${\bf Y}^{\rm tgt}$ and ${\bf Y}^{\rm int}$ for the loss functions for the target and interference speakers, respectively. The loss for the target speaker is defined to maximize the target-speaker ASR accuracy, while the loss for interference speakers is defined to maximize the interference-speaker ASR accuracy. We used lattice-free maximum mutual information (LF-MMI) BIBREF23 for both criteria.
The original motivation of the output branch for interference speakers was the improvement of TS-ASR by achieving a better representation for speaker separation in the shared layers. However, it was also shown that the output branch for interference speakers can be used for the secondary ASR for interference speakers given the embedding of the target speaker BIBREF5. In this paper, we found that the latter property works very well for ASR on dialogue recordings, as will be explained in the evaluation section.
The network is trained with a mixture of multi-speaker speech given their transcriptions. We assume that, for each training sample, (a) transcriptions of at least two speakers are given, (b) the transcription for the target speaker is marked so that we can identify the target speaker's transcription, and (c) a sample for the target speaker can be used to extract speaker embeddings. These assumptions can be easily satisfied by artificially generating training data by mixing the speech of multiple speakers.
<<</Overview>>>
<<<Loss function>>>
The main loss function for the target speaker is defined as
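(The formula itself is missing from this extract. The standard LF-MMI objective, which is consistent with the symbols explained below, would read $\mathcal {F}^{\rm tgt} = \sum _{u} \log \frac{\sum _{{\bf S} \in \mathcal {G}^{\rm tgt}_u} P({\bf X}_u|{\bf S})\,P({\bf S})}{\sum _{{\bf S} \in \mathcal {G}^{D}} P({\bf X}_u|{\bf S})\,P({\bf S})}$; this reconstruction is an assumption based on the LF-MMI literature, not reproduced from the paper.)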
where $u$ corresponds to the index of training samples in this case. The term $\mathcal {G}^{\rm tgt}_u$ indicates a numerator (or reference) graph that represents a set of possible correct state sequences for the utterance of the target speaker of the $u$-th training sample, ${\bf S}$ denotes a hypothesis state sequence for the $u$-th training sample, and $\mathcal {G}^{D}$ denotes a denominator graph, which represents a possible hypothesis space and normally consists of a 4-gram phone language model in LF-MMI training BIBREF23.
The auxiliary interference speaker loss is then defined to maximize the interference-speaker ASR accuracy, which we expect to enhance the speaker separation ability of the neural network. This loss is defined as
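(This formula is likewise missing from the extract; it presumably has the same LF-MMI form as the target loss, with the numerator graph $\mathcal {G}^{\rm int}_u$ in place of $\mathcal {G}^{\rm tgt}_u$.)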
where $\mathcal {G}^{\rm int}_u$ denotes a numerator (or reference) graph that represents a set of possible correct state sequences for the utterance of the interference speaker of the $u$-th training sample.
Finally, the loss function $\mathcal {F}^{\rm comb}$ for training is defined as the combination of the target and interference losses,
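(plausibly $\mathcal {F}^{\rm comb} = \mathcal {F}^{\rm tgt} + \alpha \,\mathcal {F}^{\rm int}$; the original equation is not reproduced in this extract, so this form is a reconstruction from the description rather than the paper's own formula)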
where $\alpha $ is the scaling factor for the auxiliary loss. In our evaluation, we set $\alpha =1.0$; setting $\alpha =0.0$ corresponds to normal TS-ASR.
<<</Loss function>>>
<<</TS-AM with auxiliary output network>>>
<<</TS-ASR: Review>>>
<<<Evaluation>>>
<<<Experimental settings>>>
<<<Main evaluation data: real dialogue recordings>>>
We conducted our experiments on the CSJ BIBREF25, which is one of the most widely used evaluation sets for Japanese speech recognition. The CSJ consists of more than 600 hrs of Japanese recordings.
While most of the content is lecture recordings by a single speaker, CSJ also contains 11.5 hrs of 54 dialogue recordings (average 12.8 min per recording) with two speakers, which were the main target of ASR and speaker diarization in this study. During the dialogue recordings, two speakers sat in two adjacent soundproof chambers divided by a glass window. They could talk with each other over a voice connection, each through a headset. Therefore, speech was recorded separately for each speaker, and we generated mixed monaural recordings by mixing the corresponding speech of the two speakers. When mixing the two recordings, we did not apply any normalization of speech volume. Due to this recording procedure, we were able to use non-overlapped speech to evaluate the oracle WERs.
It should be noted that, although the dialogue consisted of only two speakers, the speaker overlap ratio of the recordings was very high due to many backchannels and natural turn-taking. Among all recordings, 16.7% of the region was overlapped speech while 66.4% was spoken by a single speaker. The remaining 16.9% was silence. Therefore, 20.1% (=16.7/(16.7+66.4)) of speech regions was speaker overlap. From the viewpoint of ASR, 33.5% (= (16.7*2)/(16.7*2+66.4)) of the total duration to be recognized was overlapped. These values were even higher than those reported for meetings with more than two speakers BIBREF26, BIBREF2. Therefore, these dialogue recordings are very challenging for both ASR and speaker diarization. We observed significantly high WER and DER, which is discussed in the next section.
<<</Main evaluation data: real dialogue recordings>>>
<<<Sub evaluation data: simulated 2-speaker mixture>>>
To evaluate TS-ASR, we also used the simulated 2-speaker-mixed data by mixing the three official single-speaker evaluation sets of CSJ, i.e., E1, E2, and E3 BIBREF27. Each set includes different groups of 10 lectures (5.6 hrs, 30 lectures in total). The E1 set consists of 10 lectures of 10 male speakers, and E2 and E3 each consist of 10 lectures of 5 female and 5 male speakers. We generated two-speaker mixed speech by adding randomly selected speech (= interference-speaker speech) to the original speech (= target-speaker speech) with the constraints that the target and interference speakers were different and that each interference speaker was selected only once from the dataset. When mixing the two recordings, we set them to the same power level, and the shorter recording was mixed into the longer one at a random starting point chosen so that the end point of the shorter one did not exceed that of the longer one.
<<</Sub evaluation data: simulated 2-speaker mixture>>>
<<<Training data and training settings>>>
The remaining 571 hrs of 3,207 lecture recordings (excluding lectures by speakers who appear in the evaluation sets) were used for AM and language model (LM) training. We generated two-speaker mixed speech for the training data in accordance with the following protocol.
Prepare a list of speech samples (= main list).
Shuffle the main list to create a second list under the constraint that the same speaker does not appear in the same line in the main and second lists.
Mix the audio in the main and second lists one-by-one with a specific signal-to-interference ratio (SIR). For the training data, we randomly sampled an SIR as follows (a small sampling sketch is given below).
In 1/3 probability, sample the SIR from a uniform distribution between -10 and 10 dB.
In 1/3 probability, sample the SIR from a uniform distribution between 10 and 60 dB. The transcription of the interference speaker was set to null.
In 1/3 probability, sample the SIR from a uniform distribution between -60 and -10 dB. The transcription of the target speaker was set to null.
The volume of each mixed speech was randomly changed to enhance robustness against volume difference.
A speech for extracting a speaker embedding was also randomly selected for each speech mixture from the main list. Note that the random perturbation of volume was applied only for the training data, not for evaluation data.
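A minimal Python sketch of the SIR sampling scheme described in the protocol above; the function name, the use of None for a null transcription, and the return format are illustrative assumptions, and the actual waveform mixing is omitted.

import random

def sample_sir_and_transcripts(target_transcript, interference_transcript):
    # Follows the 1/3 - 1/3 - 1/3 SIR sampling scheme described above.
    branch = random.randrange(3)
    if branch == 0:
        # Comparable levels: keep both transcriptions.
        return random.uniform(-10.0, 10.0), target_transcript, interference_transcript
    elif branch == 1:
        # Interference is much quieter: its transcription is set to null.
        return random.uniform(10.0, 60.0), target_transcript, None
    else:
        # Target is much quieter: its transcription is set to null.
        return random.uniform(-60.0, -10.0), None, interference_transcript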
We trained a TS-AM consisting of a convolutional neural network (CNN), time-delay NN (TDNN) BIBREF28, and long short-term memory (LSTM) BIBREF29, as shown in Figure FIGREF13. The input acoustic feature for the network was a 40-dimensional FBANK without normalization. A 100-dimensional i-vector was also extracted and used for the target-speaker embedding to indicate the target speaker. For extracting this i-vector, we randomly selected an utterance of the same speaker. We conducted 8 epochs of training on the basis of LF-MMI, where the initial learning rate was set to 0.001 and exponentially decayed to 0.0001 by the end of the training. We applied $l2$-regularization and CE-regularization BIBREF23 with scales of 0.00005 and 0.1, respectively. The leaky hidden Markov model coefficient was set to 0.1. A backstitch technique BIBREF30 with a backstitch scale of 1.0 and backstitch interval of 4 was also used.
For comparison, we trained another TS-AM without the auxiliary loss. We also trained a “clean AM” using clean, non-speaker-mixed speech. For this clean model, we used a model architecture without the auxiliary output branch, and an i-vector was extracted every 100 msec for online speaker/environment adaptation.
In decoding, we used a 4-gram LM trained using the transcription of the training data. All our experiments were conducted on the basis of the Kaldi toolkit BIBREF31.
<<</Training data and training settings>>>
<<</Experimental settings>>>
<<<Preliminary experiment with simulated 2-speaker mixture>>>
<<<Evaluation of TS-ASR>>>
We first evaluated the TS-AM with two-speaker mixture of the E1, E2, and E3 evaluation sets. For each test utterance, a sample of the target speaker was randomly selected from the other utterances in the test set. We used the same random seed over all experiments, so that they could be conducted under the same conditions.
The results are listed in Table TABREF32. Although the clean AM produced a WER of 7.90% on the original clean dataset, the WER severely degraded to 88.03% when two speakers were mixed. The TS-AM then significantly recovered the WER, to 20.78% ($\alpha =0.0$). Although the improvement was smaller than that reported in BIBREF5, the auxiliary loss further improved the WER to 20.53% ($\alpha =1.0$). Note that E1 contains only male speakers, while E2 and E3 contain both female and male speakers; since same-gender mixtures are harder to separate, E1 showed a larger WER degradation when two speakers were mixed.
<<</Evaluation of TS-ASR>>>
<<</Preliminary experiment with simulated 2-speaker mixture>>>
<<</Evaluation>>>
<<</Title>>>
|
{
"references": [
"Introduction, Simultaneous ASR and Speaker Diarization"
],
"type": "disordered_section"
}
|
1911.08829
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Casting a Wide Net: Robust Extraction of Potentially Idiomatic Expressions
<<<Abstract>>>
Idiomatic expressions like `out of the woods' and `up the ante' present a range of difficulties for natural language processing applications. We present work on the annotation and extraction of what we term potentially idiomatic expressions (PIEs), a subclass of multiword expressions covering both literal and non-literal uses of idiomatic expressions. Existing corpora of PIEs are small and have limited coverage of different PIE types, which hampers research. To further progress on the extraction and disambiguation of potentially idiomatic expressions, larger corpora of PIEs are required. In addition, larger corpora are a potential source for valuable linguistic insights into idiomatic expressions and their variability. We propose automatic tools to facilitate the building of larger PIE corpora, by investigating the feasibility of using dictionary-based extraction of PIEs as a pre-extraction tool for English. We do this by assessing the reliability and coverage of idiom dictionaries, the annotation of a PIE corpus, and the automatic extraction of PIEs from a large corpus. Results show that combinations of dictionaries are a reliable source of idiomatic expressions, that PIEs can be annotated with a high reliability (0.74-0.91 Fleiss' Kappa), and that parse-based PIE extraction yields highly accurate performance (88% F1-score). Combining complementary PIE extraction methods increases reliability further, to over 92% F1-score. Moreover, the extraction method presented here could be extended to other types of multiword expressions and to other languages, given that sufficient NLP tools are available.
<<</Abstract>>>
<<<Introduction>>>
Idiomatic expressions pose a major challenge for a wide range of applications in natural language processing BIBREF0. These include machine translation BIBREF1, BIBREF2, semantic parsing BIBREF3, sentiment analysis BIBREF4, and word sense disambiguation BIBREF5. Idioms show significant syntactic and morphological variability (e.g. beans being spilled for spill the beans), which makes them hard to find automatically. Moreover, their non-compositional nature makes idioms really hard to interpret, because their meaning is often very different from the meanings of the words that make them up. Hence, successful systems need not only be able to recognise idiomatic expressions in text or dialogue, but they also need to give a proper interpretation to them. As a matter of fact, current language technology performs badly on idiom understanding, a phenomenon that perhaps has not received enough attention.
Nearly all current language technology used in NLP applications is based on supervised machine learning. This requires large amounts of labelled data. In the case of idiom interpretation, however, only small datasets are available. These contain just a couple of thousand idiom instances, covering only about fifty different types of idiomatic expressions. In fact, existing annotated corpora tend to cover only a small set of idiom types, comprising just a few syntactic patterns (e.g., verb-object combinations), of which a limited number of instances are extracted from a large corpus.
This is not surprising as preparing and compiling such corpora involves a large amount of manual extraction work, especially if one wants to allow for form variation in the idiomatic expressions (for example, extracting cooking all the books for cook the books). This work involves both the crafting of syntactic patterns to match potential idiomatic expressions and the filtering of false extractions (non-instances of the target expression e.g. due to wrong parses), and increases with the amount of idiom types included in the corpus (which, in the worst case, means an exponential increase in false extractions). Thus, building a large corpus of idioms, especially one that covers many types in many syntactic constructions, is costly. If a high-precision, high-recall system can be developed for the task of extracting the annotation candidates, this cost will be greatly reduced, making the construction of a large corpus much more feasible.
The variability of idioms has been a significant topic of interest among researchers of idioms. For example, BIBREF6 investigates the internal and external modification of a set of idioms in a large English corpus, whereas BIBREF7, quantifies and classifies the variation of a set of idioms in a large corpus of Dutch, setting up a useful taxonomy of variation types. Both find that, although idiomatic expressions mainly occur in their dictionary form, there is a significant minority of idiom instances that occur in non-dictionary variants. Additionally, BIBREF8 show that idiom variants retain their idiomatic meaning more often and are processed more easily than previously assumed. This emphasises the need for corpora covering idiomatic expressions to include these variants, and for tools to be robust in dealing with them.
As such, the aim of this article is to describe methods and provide tools for constructing larger corpora annotated with a wider range of idiom types than currently in existence due to the reduced amount of manual labour required. In this way we hope to stimulate further research in this area. In contrast to previous approaches, we want to catch as many idiomatic expressions as possible, and we achieve this by casting a wide net, that is, we consider the widest range of possible idiom variants first and then filter out any bycatch in a way that requires the least manual effort.
We expect that research will benefit from having larger corpora by improving evaluation quality, by allowing for the training of better supervised systems, and by providing additional linguistic insight into idiomatic expressions. A reliable method for extracting idiomatic expressions is not only needed for building an annotated corpus, but can also be used as part of an automatic idiom processing pipeline. In such a pipeline, extracting potentially idiomatic expressions can be seen as a first step before idiom disambiguation, and the combination of the two modules then functions as a complete idiom extraction system.
The main research question that we aim to answer in this article is whether dictionary-based extraction of potentially idiomatic expressions is robust and reliable enough to facilitate the creation of wide-coverage sense-annotated idiom corpora.
By answering this question we make several contributions to research on multiword expressions, in particular that of idiom extraction. Firstly, we provide an overview of existing research on annotating idiomatic expressions in corpora, showing that current corpora cover only small sets of idiomatic types (Section SECREF3). Secondly, we quantify the coverage and reliability of a set of idiom dictionaries, demonstrating that there is little overlap between resources (Section SECREF4). Thirdly, we develop and release an evaluation corpus for extracting potentially idiomatic expressions from text (Section SECREF5). Finally, various extraction systems and combinations thereof are implemented, made available to the research community, and evaluated empirically (Section SECREF6).
<<</Introduction>>>
<<<New Terminology: Potentially Idiomatic Expression (PIE)>>>
The ambiguity of phrases like wake up and smell the coffee poses a terminological problem. Usually, these phrases are called idiomatic expressions, which is suitable when they are used in an idiomatic sense, but not so much when they are used in a literal sense. Therefore, we propose a new term: potentially idiomatic expressions, or PIEs for short. The term potentially idiomatic expression refers to those expressions which can have an idiomatic meaning, regardless of whether they actually have that meaning in a given context. So, see the light is a PIE in both `After another explanation, I finally saw the light' and `I saw the light of the sun through the trees', while it is an idiomatic expression in the first context, and a literal phrase in the latter context.
The processing of PIEs involves three main challenges: the discovery of (new) PIE types, the extraction of instances of known PIE types in text, and the disambiguation of PIE instances in context. Here, we propose calling the discovery task simply PIE discovery, the extraction task simply PIE extraction, and the disambiguation task PIE disambiguation. Note that these terms contrast with the terms used in existing research. There, the discovery task is called type-based idiom detection and the disambiguation task is called token-based idiom detection (cf. BIBREF10, BIBREF11), although this usage is not always consistent. Because these terms are very similar, they are potentially confusing, and that is why we propose novel terminology.
Other terminology comes from literature on multiword expressions (MWEs) more generally, i.e. not specific to idioms. Here, the task of finding new MWE types is called MWE discovery and finding instances of known MWE types is called MWE identification BIBREF12. Note, however, that MWE identification generally consists of finding only the idiomatic usages of these types (e.g. BIBREF13). This means that MWE identification consists of both the extraction and disambiguation tasks, performed jointly. In this work, we propose to split this into two separate tasks, and we are concerned only with the PIE extraction part, leaving PIE disambiguation as a separate problem.
<<</New Terminology: Potentially Idiomatic Expression (PIE)>>>
<<<Related Work>>>
This section is structured so as to reflect the dual contribution of the present work. First, we discuss existing resources annotated for idiomatic expressions. Second, we discuss existing approaches to the automatic extraction of idioms.
<<<Annotated Corpora and Annotation Schemes for Idioms>>>
There are four sizeable sense-annotated PIE corpora for English: the VNC-Tokens Dataset BIBREF9, the Gigaword dataset BIBREF14, the IDIX Corpus BIBREF10, and the SemEval-2013 Task 5 dataset BIBREF15. An overview of these corpora is presented in Table TABREF7.
<<<VNC-Tokens>>>
The VNC-Tokens dataset contains 53 different PIE types. BIBREF9 extract up to 100 instances from the British National Corpus for each type, for a total of 2,984 instances. These types are based on a pre-existing list of verb-noun combinations and were filtered for frequency and whether two idiom dictionaries both listed them. Instances were extracted automatically, by parsing the corpus and selecting all sentences with the right verb and noun in a direct-object relation. It is unclear whether the extracted sentences were manually checked, but no false extractions are mentioned in the paper or present in the dataset.
All extracted PIE instances were annotated for sense as either idiomatic, literal or unclear. This is a self-explanatory annotation scheme, but BIBREF9 note that senses are not binary, but can form a continuum. For example, the idiomaticity of have a word in `You have my word' is different from both the literal sense in `The French have a word for this' and the figurative sense in `My manager asked to have a word'. They instructed annotators to choose idiomatic or literal even in ambiguous middle-of-the-continuum cases, and restrict the unclear label only to cases where there is not enough context to disambiguate the meaning of the PIE.
<<</VNC-Tokens>>>
<<<Gigaword>>>
BIBREF14 present a corpus of 17 PIE types, for which they extracted all instances from the Gigaword corpus BIBREF18, yielding a total of 3,964 instances. BIBREF14 extracted these instances semi-automatically by manually defining all inflectional variants of the verb in the PIE and matching these in the corpus. They did not allow for inflectional variations in non-verb words, nor did they allow intervening words. They annotated these potential idioms as either literal or figurative, excluding ambiguous and unclear instances from the dataset.
<<</Gigaword>>>
<<<IDIX>>>
BIBREF10 build on the methodology of BIBREF14, but annotate a larger set of idioms (52 types) and extract all occurrences from the BNC rather than the Gigaword corpus, for a total of 4,022 instances including false extractions. BIBREF10 use a more complex semi-automatic extraction method, which involves parsing the corpus, manually defining the dependency patterns that match the PIE, and extracting all sentences containing those patterns from the corpus. This allows for larger form variations, including intervening words and inflectional variation of all words. In some cases, this yields many non-PIE extractions, as for recharge one's batteries in Example SECREF10. These were not filtered out before annotation, but rather filtered out as part of the annotation process, by having false extraction as an additional annotation label.
For sense annotation, they use an extensive tagset, distinguishing literal, non-literal, both, meta-linguistic, embedded, and undecided labels. Here, the both label (Example SECREF10) is used for cases where both senses are present, often as a form of deliberate word play. The meta-linguistic label (Example SECREF10) applies to cases where the PIE instance is used as a linguistic item to discuss, not as part of a sentence. The embedded label (Example SECREF10) applies to cases where the PIE is embedded in a larger figurative context, which makes it impossible to say whether a literal or figurative sense is more applicable. The undecided label is used for unclear and undecidable cases. They take into account the fact that a PIE can have multiple figurative senses, and enumerate these separately as part of the annotation.
. These high-performance, rugged tools are claimed to offer the best value for money on the market for the enthusiastic d-i-yer and tradesman, and for the first time offer the possibility of a battery recharging time of just a quarter of an hour. (from IDIX corpus, ID #314)
. Left holding the baby, single mothers find it hard to fend for themselves. (from BIBREF10, p.642)
. It has long been recognised that expressions such as to pull someone's leg, to have a bee in one's bonnet, to kick the bucket, to cook someone's goose, to be off one's rocker, round the bend, up the creek, etc. are semantically peculiar. (from BIBREF10, p.642)
. You're like a restless bird in a cage. When you get out of the cage, you'll fly very high. (from BIBREF10, p.642)
The both, meta-linguistic, and embedded labels are useful and linguistically interesting distinctions, although they occur very rarely (0.69%, 0.15%, and an unknown %, respectively). As such, we include these cases in our tagset (see Section SECREF5), but group them under a single label, other, to reduce annotation complexity. We also follow BIBREF10 in that we combine both the PIE/non-PIE annotation and the sense annotation in a single task.
<<</IDIX>>>
<<<SemEval-2013 Task 5b>>>
BIBREF15 created a dataset for SemEval-2013 Task 5b, a task on detecting semantic compositionality in context. They selected 65 PIE types from Wiktionary, and extracted instances from the ukWaC corpus BIBREF17, for a total of 4,350 instances. It is unclear how they extracted the instances, and how much variation was allowed for, although there is some inflectional variation in the dataset. An unspecified amount of manual filtering was done on the extracted instances.
The extracted PIE instances were labelled as literal, idiomatic, both, or undecidable. Interestingly, they crowdsourced the sense annotations using CrowdFlower, with high agreement (90%–94% pairwise). Undecidable cases and instances on which annotators disagreed were removed from the dataset.
<<</SemEval-2013 Task 5b>>>
<<<General Multiword Expression Corpora>>>
In addition to the aforementioned idiom corpora, there are also corpora focused on multiword expressions (MWEs) in a more general sense. As idioms are a subcategory of MWEs, these corpora also include some idioms. The most important of these are the PARSEME corpus BIBREF19 and the DiMSUM corpus BIBREF20.
DiMSUM provides annotations of over 5,000 MWEs in approximately 90K tokens of English text, consisting of reviews, tweets and TED talks. However, they do not categorise the MWEs into specific types, meaning we cannot easily quantify the number of idioms in the corpus. In contrast to the corpus-specific sense labels seen in other corpora, DiMSUM annotates MWEs with WordNet supersenses, which provide a broad category of meaning for each MWE.
Similarly, the PARSEME corpus consists of over 62K MWEs in almost 275K tokens of text across 18 different languages (with the notable exception of English). The main differences with DiMSUM, except for scale and multilingualism, are that it only includes verbal MWEs, and that subcategorisation is performed, including a specific category for idioms. Idioms make up almost a quarter of all verbal MWEs in the corpus, although the proportion varies wildly between languages. In both corpora, MWE annotation was done in an unrestricted manner, i.e. there was not a predefined set of expressions to which annotation was restricted.
<<</General Multiword Expression Corpora>>>
<<<Overview>>>
In sum, there is large variation in corpus creation methods, regarding PIE definition, extraction method, annotation schemes, base corpus, and PIE type inventory. Depending on the goal of the corpus, the amount of deviation that is allowed from the PIE's dictionary form to the instances can be very little BIBREF14, to quite a lot BIBREF10. The number of PIE types covered by each corpus is limited, ranging from 17 to 65 types, often limited to one or more syntactic patterns. The extraction of PIE instances is usually done in a semi-automatic manner, by manually defining patterns in a text or parse tree, and doing some manual filtering afterwards. This works well, but an extension to a large number of PIE types (e.g. several hundreds) would also require a large increase in the amount of manual effort involved. Considering the sense annotations done on the PIE corpora, there is significant variation, with BIBREF9 using only three tags, whereas BIBREF10 use six. Outside of PIE-specific corpora there are MWE corpora, which provide a different perspective. A major difference there is that annotation is not restricted to a pre-specified set of expressions, which has not been done for PIEs specifically.
<<</Overview>>>
<<</Annotated Corpora and Annotation Schemes for Idioms>>>
<<<Extracting Idioms from Corpora>>>
There are two main approaches to idiom extraction. The first approach aims to distinguish idioms from other multiword phrases, where the main purpose is to expand idiom inventories with rare or novel expressions BIBREF21, BIBREF22, BIBREF23, BIBREF24. The second approach aims to extract all occurrences of a known idiomatic expression in a text. In this paper, we focus on the latter approach. We rely on idiom dictionaries to provide a list of PIE types, and build a system that extracts all instances of those PIE types from a corpus. High-quality idiom dictionaries exist for most well-resourced languages, but their reliability and coverage is not known. As such, we quantify the coverage of dictionaries in Section SECREF4.
There is, to the best of our knowledge, no existing work that focuses on dictionary-based PIE extraction. However, there is closely-related work by BIBREF25, who present a system for the dictionary-based extraction of verb-noun combinations (VNCs) in English and Spanish. In their case, the VNCs can be any kind of multiword expression, which they subdivide into literal expressions, collocations, light verb constructions, metaphoric expressions, and idioms. They extract 173 English VNCs and 150 Spanish VNCs and annotate these with both their lexico-semantic MWE type and the amount of morphosyntactic variation they exhibit. BIBREF25 then compare a word sequence-based method, a chunking-based method, and a parse-based method for VNC extraction. Each method relies on the morpho-syntactic information in order to limit false extractions. Precision is evaluated manually on a sample of the extracted VNCs, and recall is estimated by calculating the overlap between the output of the three methods. Evaluation shows that the methods are highly complementary both in recall, since they extract different VNCs, and in precision, since combining the extractors yields fewer false extractions.
Whereas BIBREF25 focus on both idiomatic and literal uses of the set of expressions, as in this paper, BIBREF26 tackle only half of that task, namely extracting only literal uses of a given set of VMWEs in Polish. This complicates the task, since it combines extracting all occurrences of the VMWEs and then distinguishing literal from idiomatic uses. Interestingly, they also experiment with models of varying complexity, i.e. just words, part-of-speech tags, and syntactic structures. Their results are hard to put into perspective, however, since literal uses of the VMWEs are very rare in their corpus, whereas corpora containing PIEs tend to show a more balanced distribution.
Other similar work to ours also focuses on MWEs more generally, or on different subtypes of MWEs. In addition, these tend to combine both extraction and disambiguation in that they aim to extract only idiomatically used instances of the MWE, without extracting literally used instances or non-instances. Within this line of work, BIBREF27 focuses on verb-particle constructions, BIBREF28 on verbal MWEs (including idioms), and BIBREF29 on verbal MWEs (especially non-canonical variants).
Both BIBREF28 and BIBREF29 rely on a pre-defined set of expressions, whereas BIBREF27 also extracts unseen expressions, although based on a pre-defined set of particles and within the vary narrow syntactic frame of verb-particle constructions. The work of BIBREF27 is most similar to ours in that it builds an unsupervised system using existing NLP tools (PoS taggers, chunkers, parsers) and finds that a combination of systems using those tools performs best, as we find in Section SECREF69. BIBREF28 and BIBREF29, by contrast, use supervised classifiers which require training data, not just for the task in general, but specific to the set of expressions used in the task.
Although our approach is similar to that of BIBREF25, both in the range of methods used and in the goal of extracting certain multiword expressions regardless of morphosyntactic variation, there are two main differences. First, we use dictionaries, but extract entries automatically and do not manually annotate their type and variability. As a result, our methods rely only on the surface form of the expression taken from the dictionary. Second, we evaluate precision and recall in a more rigorous way, by using an evaluation corpus exhaustively annotated for PIEs. In addition, we do not put any restriction on the syntactic type of the expressions to be extracted, which BIBREF27, BIBREF28, BIBREF25, and BIBREF29 all do.
<<</Extracting Idioms from Corpora>>>
<<</Related Work>>>
<<<Coverage of Idiom Inventories>>>
<<<Background>>>
Since our goal is developing a dictionary-based system for extracting potentially idiomatic expressions, we need to devise a proper method for evaluating such a system. This is not straightforward, even though the final goal of such a system is simple: it should extract all potentially idiomatic expressions from a corpus and nothing else, regardless of their sense and the form they are used in. The type of system proposed here hence has two aspects that can be evaluated: the dictionary that it uses as a resource for idiomatic expressions, and the extractor component that finds idioms in a corpus.
The difficulty here is that there is no undisputed and unambiguous definition of what counts as an idiom BIBREF30, as is the case with multiword expressions in general BIBREF12. Of course, a complete set of idiomatic expressions for English (or any other language) is impossible to obtain, due to the broad and ever-changing nature of language. This incompleteness is exacerbated by the ambiguity problem: if we had a clear definition of idiom we could attempt to evaluate idiom dictionaries on their accuracy, but it is practically impossible to come up with a definition of idiom that leaves no room for ambiguity. This ambiguity, among other factors, creates a large grey area between clearly non-idiomatic phrases on the one hand (e.g. buy a house), and clear potentially idiomatic phrases on the other hand (e.g. buy the farm). As a consequence, we cannot empirically evaluate the coverage of the dictionaries. Instead, in this work, we will quantify the divergence between various idiom dictionaries and corpora, with regard to their idiom inventories. If they show large discrepancies, we take that to mean that either there is little agreement on definitions of idiom or the category is so broad that a single resource can only cover a small proportion. Conversely, if there is large agreement, we assume that idiom resources are largely reliable, and that there is consensus around what is, and what is not, an idiomatic expression.
We use different idiom resources and assume that the combined set of resources yields an approximation of the true set of idioms in English. A large divergence between the idiom inventories of these resources would then suggest a low recall for a single resource, since many other idioms are present in the other resources. Conversely, if the idiom inventories largely overlap, that indicates that a single resource can already yield decent coverage of idioms in the English language. The results of the dictionary comparisons are in Section SECREF36.
<<</Background>>>
<<<Selected Idiom Resources (Data and Method)>>>
We evaluate the quality of three idiom dictionaries by comparing them to each other and to three idiom corpora. Before we report on the comparison, we first describe why we selected these resources and how we prepared them. We investigate the following six idiom resources:
Wiktionary;
the Oxford Dictionary of English Idioms (ODEI, BIBREF31);
UsingEnglish.com (UE);
the Sporleder corpus BIBREF10;
the VNC dataset BIBREF9;
and the SemEval-2013 Task 5 dataset BIBREF15.
These dictionaries were selected because they are available in digital format. Wiktionary and UsingEnglish have the added benefit of being freely available. However, they are both crowdsourced, which means they lack professional editing. In contrast, ODEI is a traditional dictionary, created and edited by lexicographers, but it has the downside of not being freely available.
For Wiktionary, we extracted all idioms from the category `English Idioms' from the English version of Wiktionary. We took the titles of all pages containing a dictionary entry and considered these idioms. Since we focus on multiword idiomatic expressions, we filtered out all single-word entries in this category. More specifically, since Wiktionary is a constantly changing resource, we used the 8,482 idioms retrieved on 10-03-2017, 15:30. We used a similar extraction method for UE, a web page containing freely available resources for ESL learners, including a list of idioms. We extracted all idioms which have publicly available definitions, which numbered 3,727 on 10-03-2017, 15:30. Again, single-word entries and duplicates were filtered out. Concerning ODEI, all idioms from the e-book version were extracted, amounting to 5,911 idioms scraped on 13-03-2017, 10:34. Here we performed an extra processing step to expand idioms containing content in parentheses, such as a tough (or hard) nut (to crack). Using a set of simple expansion rules and some hand-crafted exceptions, we automatically generated all variants for this idiom, with good, but not perfect accuracy. For the example above, the generated variants are: {a tough nut, a tough nut to crack, a hard nut, a hard nut to crack}. The idioms in the VNC dataset are in the form verb_noun, e.g. blow_top, so they were manually expanded to a regular dictionary form, e.g. blow one's top before comparison.
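A minimal sketch of the kind of expansion rules used for the parenthesized ODEI entries; it only covers the two patterns in the example above (alternatives such as "(or hard)" and optional material such as "(to crack)"), and the function name and rule details are illustrative assumptions, since the actual rule set also included hand-crafted exceptions.

import re
from itertools import product

def expand_entry(entry):
    # Split into text chunks and parenthesized groups, e.g.
    # "a tough (or hard) nut (to crack)" -> ["a tough ", "or hard", " nut ", "to crack", ""]
    parts = re.split(r"\(([^)]*)\)", entry)
    options = [[parts[0]]]
    for i in range(1, len(parts), 2):
        group, following = parts[i].strip(), parts[i + 1]
        if group.startswith("or "):
            # Alternative wording: replace the last word of the preceding chunk.
            head, _, last_word = options[-1][0].rstrip().rpartition(" ")
            options[-1] = [head + " " + last_word, head + " " + group[3:]]
            options.append([following])
        else:
            # Optional material: include it or leave it out.
            options.append(["", " " + group])
            options.append([following])
    return sorted({" ".join("".join(combo).split()) for combo in product(*options)})

For the example entry a tough (or hard) nut (to crack), this sketch produces exactly the four variants listed above.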
<<</Selected Idiom Resources (Data and Method)>>>
<<<Method>>>
In many cases, using simple string-match to check overlap in idioms does not work, as exact comparison of idioms misses equivalent idioms that differ only slightly in dictionary form. Differences between resources are caused by, for example:
inflectional variation (crossing the Rubicon — cross the Rubicon);
variation in scope (as easy as ABC — easy as ABC);
determiner variation (put the damper on — put a damper on);
spelling variation (mind your p's and q's — mind your ps and qs);
order variation (call off the dogs — call the dogs off);
and different conventions for placeholder words (recharge your batteries — recharge one's batteries), where both your and one's can generalise to any possessive personal pronoun.
These minor variations do not fundamentally change the nature of the idiom, and we should count these types of variation as belonging to the same idiom (see also BIBREF32, who devise a measure to quantify different types of variation allowed by specific MWEs). So, to get a good estimate of the true overlap between idiom resources, these variations need to be accounted for, which we do in our flexible matching approach.
There is one other case of variation not listed above, namely lexical variation (e.g. rub someone up the wrong way - stroke someone the wrong way). We do not abstract over this, since we consider lexical variation to be a more fundamental change to the nature of the idiom. That is, a lexical variant is an indicator of the coverage of the dictionary, where the other variations are due to different stylistic conventions and do not indicate actual coverage. In addition, it is easy to abstract over the other types of variation in an NLP application, but this is not the case for lexical variation.
The overlap counts are estimated by abstracting over all variations except lexical variation in a semi-automatic manner, using heuristics and manual checking. Potentially overlapping idioms are selected using the following set of heuristics: whether an idiom from one resource is a substring (including gaps) of an idiom in the other resource, whether the words of an idiom form a subset of the words of an idiom in the other resource, and whether there is an idiom in the other resource which has a Levenshtein ratio of over 0.8. The Levenshtein ratio is an indicator of the Levenshtein distance between the two idioms relative to their length. These potential matches are then judged manually on whether they are really forms of the same idiom or not.
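A minimal sketch of these matching heuristics, assuming tokenised dictionary forms; the standard-library SequenceMatcher ratio is used here as a stand-in for the Levenshtein ratio mentioned above, and the 0.8 threshold is taken from the text. Pairs flagged by this function would still be judged manually.

```python
from difflib import SequenceMatcher

def is_subsequence(short, long):
    """True if the tokens of `short` occur in `long` in order, allowing gaps."""
    it = iter(long)
    return all(tok in it for tok in short)

def potential_match(idiom_a, idiom_b, threshold=0.8):
    """Flag a pair of idioms from two resources as a potential match for manual checking."""
    a, b = idiom_a.lower().split(), idiom_b.lower().split()
    if is_subsequence(a, b) or is_subsequence(b, a):   # substring, including gaps
        return True
    if set(a) <= set(b) or set(b) <= set(a):           # word-set inclusion
        return True
    # Similarity ratio over the full strings (stand-in for the Levenshtein ratio).
    return SequenceMatcher(None, idiom_a.lower(), idiom_b.lower()).ratio() > threshold

print(potential_match('crossing the Rubicon', 'cross the Rubicon'))   # True
```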
<<</Method>>>
<<<Results>>>
The results of using exact string matching to quantify the overlap between the dictionaries are illustrated in Figure FIGREF37.
Overlap between the three dictionaries is low. A possible explanation for this lies with the different nature of the dictionaries. Oxford is a traditional dictionary, created and edited by professional lexicographers, whereas Wiktionary is a crowdsourced dictionary open to everyone, and UsingEnglish is similar, but focused on ESL-learners. It is likely that these different origins result in different idiom inventories. Similarly, we would expect that the overlap between a pair of traditional dictionaries, such as the ODEI and the Penguin Dictionary of English Idioms BIBREF33, would be significantly higher. It should also be noted, however, that comparisons between more similar dictionaries also found relatively little overlap (BIBREF34; BIBREF35). A counterpoint is provided by BIBREF36, who quantifies coverage of verb-particle constructions in three different dictionaries and finds large overlap – perhaps because verb-particle constructions are a more restricted class.
As noted previously, using exact string matching is a very limited approach to calculating overlap. Therefore, we used heuristics and manual checking to get more precise numbers, as shown in Table TABREF39, which also includes the three corpora in addition to the three dictionaries. As the manual checking only involved judging similar idioms found in pairs of resources, we cannot calculate three-way overlap as in Figure FIGREF37. The counts of the pair-wise overlap between dictionaries differ significantly between the two methods, which serves to illustrate the limitations of using only exact string matching and the necessity of using more advanced methods and manual effort.
Several insights can be gained from the data in Table TABREF39. The relation between Wiktionary and the SemEval corpus is obvious (cf. Section SECREF12), given the 96.92% coverage. For the other dictionary-corpus pairs, the coverage increases proportionally with the size of the dictionary, except in the case of UsingEnglish and the Sporleder corpus. The proportional increase indicates no clear qualitative differences between the dictionaries, i.e. one does not have a significantly higher percentage of non-idioms than the other, when compared to the corpora.
Generally, overlap between dictionaries and corpora is low: the two biggest, ODEI and Wiktionary, have only around 30% overlap, while the dictionaries also cover no more than approximately 70% of the idioms used in the various corpora. Overlap between the three corpora is also extremely low, at below 5%. This is unsurprising, since a new dataset is more interesting and useful when it covers a different set of idioms than those used in an existing dataset, and thus is likely constructed with this goal in mind.
<<</Results>>>
<<</Coverage of Idiom Inventories>>>
<<<Corpus Annotation>>>
In order to evaluate the PIE extraction methods developed in this work (Section SECREF6), we exhaustively annotate an evaluation corpus with all instances of a pre-defined set of PIEs. As part of this, we come up with a workable definition of PIEs, and measure the reliability of PIE annotation by inter-annotator agreement.
Assuming that we have a set of idioms, the main problem of defining what is and what is not a potentially idiomatic expression is caused by variation. In principle, a potentially idiomatic expression is an instance of a phrase that, when seen without context, could have either an idiomatic or a literal meaning. This is clearest for the dictionary form of the idiom, as in Example SECREF5. Literal uses generally allow all kinds of variation, but not all of these variations allow a figurative interpretation, e.g. Example SECREF5. However, how much variation an idiom can undergo while retaining its figurative interpretation is different for each expression, and judgements of this might vary from one speaker to the other. An example of this is spill the bean, a variant of spill the beans, which is judged by BIBREF21 as highly questionable (Example SECREF5). However, even here a corpus example can be found containing the same variant used in a figurative sense (Example SECREF5).
As such, we assume that we cannot know a priori which variants of an expression allow a figurative reading, and are thus a potentially idiomatic expression. Therefore we consider every possible morpho-syntactic variation of an idiom a PIE, regardless of whether it actually allows a figurative reading. We believe the boundaries of this variation can only be determined based on corpus evidence, and even then they are likely variable.
Note that a similar question is tackled by BIBREF26, when they establish the boundary between a `literal reading of a VMWE' and a `coincidental co-occurrence'. BIBREF26's answer is similar to ours, in that they count something as a literal reading of a VMWE if `the same or equivalent dependencies hold between [the expression]'s components as in its canonical form'.
. John kicked the bucket last night.
. * The bucket, John kicked last night.
. ?? Azin spilled the bean. (from BIBREF21)
. Alba reveals Fantastic Four 2 details The Invisible Woman actress spills the bean on super sequel (from ukWaC)
<<<Evaluating the Extraction Methods>>>
Evaluating the extraction methods is easier than evaluating dictionary coverage, since the goal of the extraction component is more clearly delimited: given a set of PIEs from one or more dictionaries, extract all occurrences of those PIEs from a corpus. Thus, rather than dealing with the undefined set of all PIEs, we can work with a clearly defined and finite set of PIEs from a dictionary.
Because we have a clearly defined set of PIEs, we can exhaustively annotate a corpus for PIEs, and use that annotated corpus for automatic evaluation of extraction methods using recall and precision. This allows us to facilitate and speed up annotation by pre-extracting sentences possibly containing a PIE. After the corpus is annotated, the precision and recall can be easily estimated by comparing the extracted PIE instances to those marked in the corpus. The details of the corpus selection, dictionary selection, extraction heuristic and annotation procedure are presented in Section SECREF46, and the details and results of the various extraction methods are presented in Section SECREF6.
<<</Evaluating the Extraction Methods>>>
<<<Base Corpus and Idiom Selection>>>
As a base corpus, we use the XML version of the British National Corpus BIBREF37, because of its size, variety, and wide availability. The BNC is pre-segmented into s-units, which we take to be sentences, w-units, which we take to be words, and c-units, punctuation. We then extract the text of all w-units and c-units. We keep the sentence segmentation, resulting in a set of plain text sentences. All sentences are included, except for sentences containing <gap> elements, which are filtered out. These <gap> elements indicate places where material from the original has been left out, e.g. for anonymisation purposes. Since this can result in incomplete sentences that cannot be parsed correctly, we filter out sentences containing these gaps.
We use only the written part of the BNC. From this, we extract a set of documents with the aim of having as much genre variation as possible. To achieve this, we select the first document in each genre, as defined by the classCode attribute (e.g. nonAc, commerce, letters). The resulting set of 46 documents makes up our base corpus. Note that these documents vary greatly in size, which means the resulting corpus is varied, but not balanced in terms of size (Table TABREF43). The documents are split across a development and test set, as specified at the end of Section SECREF46. We exclude documents with IDs starting with A0 from all annotation and evaluation procedures, as these were used during development of the extraction tool and annotation guidelines.
As for the set of potentially idiomatic expressions, we use the intersection of the three dictionaries, Wiktionary, Oxford, and UsingEnglish. Based on the assumption that, if all three resources include a certain idiom, it must unquestionably be an idiom, we choose the intersection (also see Figure FIGREF37). This serves to exclude questionable entries, like at all, which is in Wiktionary. The final set of idioms used for these experiments consists of 591 different multiword expressions. Although we aim for wide coverage, this is a necessary trade-off to ensure quality. At the same time, it leaves us with a set of idiom types that is approximately ten times larger than present in existing corpora. The set of 591 idioms includes idioms with a large variety of syntactic patterns, of which the most frequent ones are shown in Table TABREF44. The statistics show that the types most prevalent in existing corpora, verb-noun and preposition-noun combinations, are indeed the most frequent ones, but that there is a sizeable minority of types that do not fall into those categories, including coordinated adjectives, coordinated nouns, and nouns with prepositional phrases. This serves to emphasise the necessity of not restricting corpora to a small set of syntactic patterns.
<<</Base Corpus and Idiom Selection>>>
<<<Extraction of PIE Candidates>>>
To annotate the corpus completely manually would require annotators to read the whole corpus, and cross-reference each sentence to a list of almost 600 PIEs, to check whether one of those PIEs occurs in a sentence. We do not consider this a feasible annotation setting, due to both the difficulty of recognising literal usages of idioms and the time cost needed to find enough PIEs, given their low overall frequency. As such, we use a pre-extraction step to present candidates for annotation to the human annotators.
Given the corpus and the set of PIEs, we heuristically extract the PIE candidates as follows: given an idiomatic expression, extract every sentence which contains all the defining words of the idiom, in any form. This ensures that all possibly matching sentences get extracted, while greatly pruning the number of sentences for annotators to look at. In addition, it allows us to present the heuristically matched PIE type and corresponding words to the annotators, which makes it much easier to judge whether something is a PIE or not. This also means that annotators never have to go through the full list of PIEs during the annotation process.
Initially, the heuristic simply extracted any sentence containing all the required words, where a word is any of the inflectional variants of the words in the PIE, except for determiners and punctuation. This method produced large amounts of noise, that is, a set of PIE candidates with only a very low percentage of actual PIEs. This was caused by the presence of some highly frequent PIEs with very little defining lexical content, such as on the make, and in the running. For example, with the original method, every sentence containing the preposition on, and any inflectional form of the verb make was extracted, resulting in a huge number of non-PIE candidates.
To limit the amount of noise, two restrictions were imposed. The first restriction disallows word order variation for PIEs which do not contain a verb. The rationale behind this is that word order variation is only possible with PIEs like spill the beans (e.g. the beans were spilled), and not with PIEs like in the running (*the running in??). The second restriction is that we limit the number of words that can be inserted between the words of a PIE, but only for PIEs like on the make, and in the running, i.e. PIEs which only contain prepositions, determiners and a single noun. The number of intervening words was limited to three tokens, allowing for some variation, as in Example SECREF45, but preventing sentences like Example SECREF45 from being extracted. This restriction could result in the loss of some PIE candidates with a large number of intervening words. However, the savings in annotation time clearly outweigh the small loss in recall in this situation. A sketch of the resulting extraction heuristic is given below, after the examples.
. Either at New Year or before July you can anticipate a change in the everyday running of your life. (in the running - BNC - document CBC - sentence 458)
. [..] if [he] hung around near the goal or in the box for that matter instead of running all over the show [..] (in the running - BNC - document J1C - sentence 1341)
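A simplified Python sketch of the candidate extraction heuristic with the two restrictions; for brevity it matches surface tokens only (the actual heuristic also matches inflectional variants and ignores punctuation), and the flags indicating whether a PIE contains a verb or consists only of prepositions, determiners, and a single noun are assumed to be given.

```python
DETERMINERS = {'a', 'an', 'the'}

def matches(idiom_words, tokens, require_order, max_gap):
    """Check that all idiom words occur in the sentence tokens, respecting the
    word-order and intervening-word restrictions where they apply."""
    positions = []
    for word in idiom_words:
        hits = [i for i, tok in enumerate(tokens) if tok == word]
        if not hits:
            return False
        positions.append(hits[0])              # first occurrence only, for brevity
    if require_order and positions != sorted(positions):
        return False
    if max_gap is not None:
        intervening = (max(positions) - min(positions) + 1) - len(idiom_words)
        if intervening > max_gap:
            return False
    return True

def candidates(idiom, sentences, has_verb, light_content):
    """Return sentences that may contain the given PIE."""
    words = [w for w in idiom.lower().split() if w not in DETERMINERS]
    require_order = not has_verb               # restriction 1: verbless PIEs keep word order
    max_gap = 3 if light_content else None     # restriction 2: at most 3 intervening tokens
    return [s for s in sentences
            if matches(words, s.lower().split(), require_order, max_gap)]

print(candidates('in the running',
                 ['He is in the running for the job.', 'He was running in the park.'],
                 has_verb=False, light_content=True))
# ['He is in the running for the job.']
```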
<<</Extraction of PIE Candidates>>>
<<<Annotation Procedure>>>
The manual annotation procedure consists of three different phases (pilot, double annotation, single annotation), followed by an adjudication step to resolve conflicting annotations. Two things are annotated: whether something is a PIE or not, and if it is a PIE, which sense the PIE is used in. In the first phase (0-100-*), we randomly select one hundred of the 2,239 PIE candidates, which are then annotated by three annotators. All annotators have a good command of English, are computational linguists, and are familiar with the subject. The annotators include the first and last author of this paper.
The annotators were provided with a short set of guidelines, of which the main rule-of-thumb for labelling a phrase as a PIE is as follows: any phrase is a PIE when it contains all the words, with the same part-of-speech, and in the same grammatical relations as in the dictionary form of the PIE, ignoring determiners.
For sense annotation, annotators were to mark a PIE as idiomatic if it had a sense listed in one of the idiom dictionaries, and as literal if it had a meaning that is a regular composition of its component words. For cases which were undecidable due to lack of context, the ?-label was used. The other-label was used as a container label for all cases in which neither the literal nor the idiomatic sense was correct (e.g. meta-linguistic uses and embeddings in metaphorical frames, see also Section SECREF10).
The first phase of annotation serves to bring to light any inconsistencies between annotators and fill in any gaps in the annotation guidelines. The resulting annotations already show a reasonably high agreement of 0.74 Fleiss' Kappa. Table TABREF48 shows annotation details and agreement statistics for all three phases. The annotation tasks suffixed by -PIE indicate agreement on PIE/non-PIE annotation and the tasks suffixed by -sense indicate agreement on sense annotation for PIEs.
In the second phase of annotation (100-600-* & 600-1100-*), another 1,000 of the 2,239 PIE candidates are selected to be annotated by two pairs of annotators. This phase shows very high agreement (see Table TABREF48), probably due to the improvement in guidelines and the discussion following the pilot round of annotation. The exception to this is the somewhat lower scores for the 600-1100-sense annotation task. Adjudication revealed that this is due almost exclusively to a different interpretation of the literal and idiomatic senses of a single PIE type: on the ground. Excluding this PIE type, Fleiss' Kappa increases from 0.63 to 0.77.
Because of the high agreement on PIE annotation, we deem it sufficient for the remainder (1,108 candidates) to be annotated by only the primary annotator in the third phase of annotation (1100-2239-*). The reliability of the single annotation can be checked by comparing the distribution of labels to the multi-annotated parts. This shows that it falls clearly within the ranges of the other parts, both in the proportion of PIEs and in the proportion of idiomatic senses (see Table TABREF49). The single-annotated part has 49.0% PIEs, which is only 4 percentage points above the 44.7% PIEs in the multi-annotated parts. The proportion of idioms is just 2 percentage points higher, with 55.9% versus 53.9%.
Although inter-annotator agreement was high, there was still a significant number of cases in the triple and double annotated PIE candidate sets where not all annotators agreed. These cases were adjudicated through discussion by all annotators, until they were in agreement. In addition, all PIE candidates which initially received the ?-label (unclear or undecidable) for sense or PIE were resolved in the same manner. In the adjudication procedure, annotators were provided with additional context on each side of the idiom, in contrast to the single sentence provided during the initial annotation. The main reason to do adjudication, rather than simply discarding all candidates for which there was disagreement, was that we expected exactly those cases for which there are conflicting annotations to be the most interesting ones, since having non-standard properties would cause the annotations to diverge. Examples of such interesting non-standard cases are at sea as part of a larger satirical frame in Example SECREF46 and cut the mustard in Example SECREF46 where it is used in a headline as wordplay on a Cluedo character.
. The bovine heroine has connections with Cowpeace International, and deals with a huge treacle slick at sea. (at sea - BNC - document CBC - sentence 13550)
. Why not cut the Mustard? [..] WADDINGTON Games's proposal to axe Reverend Green from the board game Cluedo is a bad one. (cut the mustard - BNC - document CBC - sentence 14548)
We split the corpus at the document level. The corpus consists of 45 documents from the BNC, and we split it in such a way that the development set has 1,112 candidates across 22 documents and the test set has 1,127 candidates from 23 documents. Note that this means that the development and test set contain different genres. This ensures that we do not optimise our systems on genre-specific aspects of the data.
<<</Annotation Procedure>>>
<<</Corpus Annotation>>>
<<<Dictionary-based PIE Extraction>>>
We propose and implement four different extraction methods, of differing complexities: exact string match, fuzzy string match, inflectional string match, and parser-based extraction. Because of the absence of existing work on this task, we compare these methods to each other, where the more basic methods function as baselines. More complex methods serve to shine light on the difficulty of the PIE extraction task; if simple methods already work sufficiently well, the task is not as hard as expected, and vice versa. Below, each of the extraction methods is presented and discussed in detail.
<<<String-based Extraction Methods>>>
<<<Exact String Match>>>
This is, very simply, extracting all instances of the exact dictionary form of the PIE, from the tokenized text of the corpus. Word boundaries are taken into account, so at sea does not match `that seawater'. As a result, all inflectional and other variants of the PIE are ignored.
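A minimal sketch of exact string matching with word boundaries; the regular-expression implementation below is an assumption rather than a description of our tool, but it shows why at sea does not match `that seawater'.

```python
import re

def exact_matches(pie, sentence):
    """Find occurrences of the exact dictionary form, respecting word boundaries."""
    pattern = re.compile(r'\b' + re.escape(pie) + r'\b')
    return [m.span() for m in pattern.finditer(sentence)]

print(exact_matches('at sea', 'The fleet is at sea, not that seawater.'))   # [(13, 19)]
```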
<<</Exact String Match>>>
<<<Fuzzy String Match>>>
Fuzzy string match is a rough way of dealing with morphological inflection of the words in a PIE. We match all words in the PIE, taking into account word boundaries, and allow for up to 3 additional letters at the end of each word. These 3 additional characters serve to cover inflectional suffixes.
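A sketch of the fuzzy matcher as a regular expression, assuming lower-cased input; each word of the PIE may be followed by up to three extra letters, which catches spills, spilled, and spilling for spill, but misses irregular forms such as ran for run.

```python
import re

def fuzzy_pattern(pie):
    """Build a pattern allowing up to 3 additional letters at the end of each PIE word."""
    words = [re.escape(w) + r'[a-z]{0,3}' for w in pie.lower().split()]
    return re.compile(r'\b' + r' '.join(words) + r'\b')

pattern = fuzzy_pattern('spill the beans')
print(bool(pattern.search('he spilled the beans yesterday.')))   # True
```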
<<</Fuzzy String Match>>>
<<<Inflectional String Match>>>
In inflectional string match, we aim to match all inflected variations of a PIE. This is done by generating all morphological variants of the words in a PIE, generating all combinations of those words, and then using exact string match as described earlier.
Generating morphological variations consists of three steps: part-of-speech tagging, morphological analysis, and morphological reinflection. Since inflectional variation only applies to verbs and nouns, we use the Spacy part-of-speech tagger to detect the verbs and nouns. Then, we apply the morphological analyser morpha to get the base, uninflected form of the word, and then use the morphological generation tool morphg to get all possible inflections of the word. Both tools are part of the Morph morphological processing suite BIBREF38. Note that the Morph tools depend on the part-of-speech tag in the input, so that a wrong PoS may lead to an incorrect set of morphological variants.
For a PIE like spill the beans, this results in the following set of variants: $\lbrace $spill the bean, spills the bean, spilled the bean, spilling the bean, spill the beans, spills the beans, spilled the beans, spilling the beans$\rbrace $. Since we generate up to 2 variants for each noun, and up to 4 variants for each verb, the number of variants for PIEs containing multiple verbs and nouns can get quite large. On average, 8 additional variants are generated for each potentially idiomatic expression.
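The combination step can be sketched as follows; the inflect function is a placeholder for the PoS tagging plus morpha/morphg pipeline described above, and the toy inflection table only serves to reproduce the spill the beans example.

```python
from itertools import product

def inflectional_variants(pie, inflect):
    """Combine per-word inflected forms into full-phrase variants for exact matching."""
    per_word = [sorted(inflect(word) | {word}) for word in pie.split()]
    return [' '.join(combo) for combo in product(*per_word)]

# Toy inflection table standing in for the morphological analyser/generator.
toy = {'spill': {'spills', 'spilled', 'spilling'}, 'beans': {'bean'}}
variants = inflectional_variants('spill the beans', lambda w: toy.get(w, set()))
print(len(variants))   # 8, matching the variant set listed above
```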
<<</Inflectional String Match>>>
<<<Additional Steps>>>
For all string match-based methods, ways to improve performance are implemented, to make them as competitive as possible. Rather than requiring the words of a PIE to be separated by spaces only, we also allow them to be separated by other characters, e.g. nuts-and-bolts for nuts and bolts. Additionally, there is an option to take into account case distinctions. With the case-sensitive option, case is preserved in the idiom lists, e.g. coals to Newcastle, and the string matching is done in a case-sensitive manner. This increases precision, e.g. by avoiding matches on PIEs that are part of proper names, but also comes at a cost of recall, e.g. for sentence-initial PIEs. Thirdly, there is the option to allow for a certain number of intervening words between each pair of words in the PIE. This should improve recall, at the cost of precision. For example, this would yield the true positive make a huge mountain out of a molehill for make a mountain out of a molehill, but also false positives like have a smoke and go for have a go.
A third shared property of the string-based methods is the processing of placeholders in PIEs. PIEs containing possessive pronoun placeholders, such as one's and someone's are expanded. That is, we remove the original PIE, and add copies of the PIE where the placeholder is replaced by one of the possessive personal pronouns. For example, a thorn in someone's side is replaced by a thorn in {my, your, his, ...} side. In the case of someone's, we also add a wildcard for any possessively used word, i.e. a thorn in —'s side, to match e.g. a thorn in Google's side. Similarly, we make sure that PIE entries containing —, such as the mother of all —, will match any word for — during extraction. We do the same for someone, for which we substitute objective pronouns. For one, this is not possible, since it is too hard to distinguish from the one used as a number.
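A sketch of the placeholder expansion for the string-based methods; the pronoun lists are illustrative, and the wildcard slots for someone's and — are represented by a marker that the matcher is assumed to treat as matching any single word.

```python
POSSESSIVES = ['my', 'your', 'his', 'her', 'its', 'our', 'their']
OBJECTIVES = ['me', 'you', 'him', 'her', 'it', 'us', 'them']
ANY_WORD = '<ANY>'   # marker the matcher treats as a wildcard for a single word

def expand_placeholders(pie):
    """Expand possessive/objective placeholders into concrete pronoun variants."""
    if "someone's" in pie:
        variants = [pie.replace("someone's", p) for p in POSSESSIVES]
        variants.append(pie.replace("someone's", ANY_WORD + "'s"))   # e.g. Google's
        return variants
    if "one's" in pie:
        return [pie.replace("one's", p) for p in POSSESSIVES]
    if 'someone' in pie:
        return [pie.replace('someone', o) for o in OBJECTIVES]
    return [pie.replace('—', ANY_WORD)] if '—' in pie else [pie]

print(expand_placeholders("a thorn in someone's side")[:3])
# ['a thorn in my side', 'a thorn in your side', 'a thorn in his side']
```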
<<</Additional Steps>>>
<<</String-based Extraction Methods>>>
<<<Parser-Based Extraction Methods>>>
Parser-based extraction is potentially the widest-coverage extraction method, with the capacity to extract both morphological and syntactic variants of the PIE. This should be robust against the most common modifications of the PIE, e.g. through word insertions (spill all the beans), passivisation (the beans were spilled), and abstract over articles (spill beans).
In this method, PIEs are extracted using the assumption that any sentence which contains the lemmata of the words in the PIE, in the same dependency relations as in the PIE, contains an instance of the PIE type in question. More concretely, this means that the parse of the sentence should contain the parse tree of the PIE as a subtree. This is illustrated in Figure FIGREF57, which shows the parse tree for the PIE lose the plot, parsed without context. Note that this is a subtree of the parse tree for the sentence `you might just lose the plot completely', which is shown in Figure FIGREF58. Since the sentence parse contains the parse of the PIE, we can conclude that the sentence contains an instance of that PIE and extract the span of the PIE instance.
All PIEs are parsed in isolation, based on the assumption that all PIEs can be parsed, since they are almost always well-formed phrases. However, not all PIEs will be parsed correctly, especially since there is no context to resolve ambiguity. Errors tend to occur at the part-of-speech level, where, for example, verb-object combinations like jump ship and touch wood are erroneously tagged as noun-noun compounds. An analysis of the impact of parser error on PIE extraction performance is presented in Section SECREF73. Initially, we use the Spacy parser for parsing both the PIEs and the sentences.
Next, the sentence is parsed, and the lemma of the top node of the parsed PIE is matched against the lemmata of the sentence parse. If a match is found, the parse tree of the PIE is matched against the subtree of the matching sentence parse node. If the whole PIE parse tree matches, the span ranging from the first PIE token to the last is extracted. This span can thus include words that are not directly part of the PIE's dictionary form, in order to account for insertions like ships were jumped for jump ship, or have a big heart for have a heart.
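The core of the matching can be sketched as follows, using the Spacy API; the model name is an assumption, and the passivisation rule and placeholder handling described below are left out, so this is a simplified illustration of the subtree matching rather than the full method. The exact output also depends on the parses produced by the model.

```python
import spacy

nlp = spacy.load('en_core_web_sm')   # assumed model; the text only specifies 'the Spacy parser'
ARTICLES = {'a', 'an', 'the'}

def tree_match(pie_tok, sent_tok):
    """Recursively check that every non-article dependent of the PIE token has a
    counterpart under the sentence token with the same lemma and relation label."""
    matched = [sent_tok]
    for child in pie_tok.children:
        if child.lemma_ in ARTICLES:
            continue
        sub = None
        for cand in sent_tok.children:
            if cand.lemma_ == child.lemma_ and cand.dep_ == child.dep_:
                sub = tree_match(child, cand)
                if sub:
                    break
        if not sub:
            return None
        matched += sub
    return matched

def extract_pie(pie, sentence):
    """Return the text span covering a PIE instance in the sentence, or None."""
    pie_root = [tok for tok in nlp(pie) if tok.dep_ == 'ROOT'][0]
    doc = nlp(sentence)
    for tok in doc:
        if tok.lemma_ == pie_root.lemma_:
            matched = tree_match(pie_root, tok)
            if matched:
                first, last = min(t.i for t in matched), max(t.i for t in matched)
                return doc[first:last + 1].text   # span from first to last matched token
    return None

print(extract_pie('lose the plot', 'You might just lose the plot completely.'))
# lose the plot
```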
During the matching, articles (a/an/the) are ignored, and passivisation is accounted for with a special rule. In addition, a number of special cases are dealt with. These are PIEs containing someone('s), something('s), one's, or —. These words are used in PIEs as placeholders for a generic possessor (someone's/something's/one's), generic object (someone/something), or any word of the right PoS (—).
For someone's, and something's, we match any possessive pronoun, or (proper) noun + possessive marker. For one's, only possessive pronouns are matched, since this is a placeholder for reflexive possessors. For someone and something, any non-possessive pronoun or (proper) noun is matched.
For — wildcards, any word can be matched, as long as it has the right relation to the right head. An additional challenge with these wildcards is that PIEs containing them cannot be parsed, e.g. too — for words is not parseable. This is dealt with by substituting the — by a PoS-ambiguous word, such as fine, or back.
Two optional features are added to the parser-based method with the goal of making it more robust to parser errors: generalising over dependency relation labels, and generalising over dependency relation direction. We expect this to increase recall at the cost of precision. In the first no labels setting, we match parts of the parse tree which have the same head lemma and the same dependent lemma, regardless of the relation label. An example of this is Figure FIGREF60, which has the wrong relation label between up and ante. If labels are ignored, however, we can still extract the PIE instance in Figure FIGREF61, which has the correct label. In the no directionality setting, relation labels are also ignored, and in addition the directionality of the relation is ignored, that is, we allow for the reversal of heads and dependents. This benefits performance in a case like Figure FIGREF62, which has stock as the head of laughing in a compound relation, whereas the parse of the PIE (Figure FIGREF63) has laughing as the head of stock in a dobj relation.
Note that similar settings were implemented by BIBREF26, who detect literal uses of VMWEs using a parser-based method with either full labelled dependencies, unlabelled dependencies, or directionless unlabelled dependencies (which they call BagOfDeps). They find that recall increases when fewer restrictions on the dependencies are used, but that, contrary to our expectation, this does not hurt precision. However, we cannot draw too many conclusions from these results due to the small size of their evaluation set, which consists of just 72 literal VMWEs in total.
<<<In-Context Parsing>>>
Since the parser-based method parses PIEs without any context, it often finds an incorrect parse, as for jump ship in Figure FIGREF65. As such, we add an option to the method that aims to increase the number of correct parses by parsing the PIE within context, that is, within a sentence. This can greatly help to disambiguate the parse, as in Figure FIGREF66. If the number of correct parses goes up, the recall of the extraction method should also increase. Naturally, it can also be the case that a PIE is parsed correctly without context, and incorrectly with context. However, we expect the gains to outweigh the losses.
The challenge here is thus to collect example sentences containing the PIE. Since the whole point of this work is to extract PIEs from raw text, this provides a catch-22-like situation: we need to extract a sentence containing a PIE in order to extract sentences containing a PIE.
The workaround for this problem is to use the exact string matching method with the dictionary form of the PIE and a very large plain text corpus to gather example sentences. By only considering the exact dictionary form we both simplify the finding of example sentences and the extraction of the PIE's parse from the sentence parse.
In case multiple example sentences are found, the shortest sentence is selected, since we assume it is easiest to parse. This is also the reason we make use of very large corpora, to increase the likelihood of finding a short, simple sentence. The example sentence extraction method is modified in such a way that sentences where the PIE is used meta-linguistically in quotes, e.g. “the well-known English idiom `to spill the beans' has no equivalents in other languages”, are excluded, since they do not provide a natural context for parsing. When no example sentence can be found in the corpus, we back-off to parsing the PIE without context. After a parse has been found for each PIE (i.e. with or without context), the method proceeds identically to the regular parser-based method.
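A sketch of the example-sentence selection; the quote check is a crude stand-in for the actual exclusion of meta-linguistic uses, and the corpus is assumed to be an iterable of plain-text sentences.

```python
def example_sentence(pie, corpus_sentences):
    """Pick the shortest corpus sentence containing the exact dictionary form of the PIE,
    skipping sentences where the PIE appears inside quotes (meta-linguistic use)."""
    hits = [s for s in corpus_sentences
            if pie in s and f"'{pie}'" not in s and f'"{pie}"' not in s]
    return min(hits, key=len) if hits else None   # None triggers the no-context back-off

print(example_sentence('spill the beans',
                       ["The English idiom 'spill the beans' is common.",
                        'Do not spill the beans.']))
# Do not spill the beans.
```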
We make use of the combination of two large corpora for the extraction of example sentences: the English Wikipedia, and ukWaC BIBREF17. For the Wikipedia corpus, we use a dump (13-01-2016) of the English-language Wikipedia, and remove all Wikipedia markup. This is done using WikiExtractor. The resulting files still contain some mark-up, which is removed heuristically. The resulting corpus contains mostly clean, raw, untokenized text, numbering approximately 1.78 billion tokens.
As for ukWaC, all XML-markup was removed, and the corpus is converted to a one-sentence-per-line format. UkWaC is tokenized, which makes it difficult for a simple string match method to find PIEs containing punctuation, for example day in, day out. Therefore, all spaces before commas, apostrophes, and sentence-final punctuation are removed. The resulting corpus contains approximately 2.05 billion tokens, making for a total of 3.83 billion tokens in the combined ukWaC and Wikipedia corpus.
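The space removal for ukWaC can be sketched with a single substitution; the exact set of punctuation marks handled in our preprocessing may differ slightly.

```python
import re

def detokenize(sentence):
    """Remove spaces before commas, apostrophes, and sentence-final punctuation."""
    return re.sub(r" (?=[,'.?!])", '', sentence)

print(detokenize("day in , day out , she 's at work ."))
# day in, day out, she's at work.
```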
<<</In-Context Parsing>>>
<<</Parser-Based Extraction Methods>>>
<<<Analysis>>>
Broadly speaking, the PIE extraction systems presented above perform in line with expectations. It is nevertheless useful to see where the best-performing system misses out, and where improvements like in-context parsing help performance.
We analyse the shortcomings of the in-context parser-based system by looking at the false positives and false negatives on the development set. We consider the output of the system with best overall performance, since it will provide the clearest picture. The system extracts 529 PIEs in total, of which 54 are false extractions (false positives), and it misses 69 annotated PIE instances (false negatives). Most false positives stem from the system's failure to capture nuances of PIE annotation. This includes cases where PIEs contain, or are part of, proper nouns (Example SECREF73), PIEs that are part of coordination constructions (Example SECREF73), and incorrect attachments (Example SECREF73). Among these errors, sentences containing proper nouns are an especially frequent problem.
. Drama series include [..] airline security thrills in Cleared For Takeoff and Head Over Heels [..] (in the clear - BNC - document CBC - sentence 5177)
. They prefer silk, satin or lace underwear in tasteful black or ivory. (in the black - BNC - document CBC - sentence 14673)
. [..] `I saw this chap make something out of an ordinary piece of wood — he fashioned it into an exquisite work of art.' (out of the woods - BNC - document ABV - sentence 1300)
The main cause of false negatives are errors made by the parser. In order to correctly extract a PIE from a sentence, both the PIE and the sentence have to be parsed correctly, or at least parsed in the same way. This means a missed extraction can be caused by a wrong parse for the PIE or a wrong parse for the sentence. These two error types form the largest class of false negatives. Since some PIE types are rather frequent, a wrong parse for a single PIE type can potentially lead to a large number of missed extractions.
It is not surprising that the parser makes many mistakes, since idioms often have unusual syntactic constructions (e.g. come a cropper) and contain words where default part-of-speech tags lead to the wrong interpretation (e.g. round is a preposition in round the bend, not a noun or adjective). This is especially true when idioms are parsed without context, and hence, where in-context parsing provides the largest benefit: the number of PIEs which are parsed incorrectly drops, which leads to F1-scores on those types going from 0% to almost 100% (e.g. in light of and ring a bell). Since parser errors are the main contributor to false negatives, hurting recall, we can observe that parsing idioms in context serves to benefit only recall, by 7 percentage points, at only a small loss in precision.
We find that adding context mainly helps for parsing expressions which are structurally relatively simple, but still ambiguous, such as rub shoulders, laughing stock, and round the bend. Compare, for example, the parse trees for laughing stock in isolation and within the extracted context sentence in Figures FIGREF74 and FIGREF75. When parsed in isolation, the relation between the two words is incorrectly labelled as a compound relation, whereas in context it is correctly labelled as a direct object relation. Note, however, that for the most difficult PIEs, embedding them in a context does not solve the parsing problem: a syntactically odd phrase is hard to parse (e.g. for the time being), and a syntactically odd phrase in a sentence makes for a syntactically odd sentence that is still hard to parse (e.g. `London for the time being had been abandoned.'). Finding example sentences turned out not to be a problem, since appropriate sentences were found for 559 of 591 PIE types.
An alternative method for reducing parser error is to use a different, better parser. The Spacy parser was mainly chosen for implementation convenience and speed, and there are parsers which have better performance, as measured on established parsing benchmarks. To investigate the effectiveness of this method, we used the Stanford Neural Dependency Parser BIBREF39 to extract PIEs in the regular parsing, in-context parsing, and no labels settings. In all cases, using the Stanford parser yielded worse extraction performance than the Spacy parser. A possible explanation for why a supposedly better parser performs worse here is that parsers are optimised and trained to do well on established benchmarks, which consist of complete sentences, often from news texts. This does not necessarily correlate with parsing performance on short (sentences containing) idiomatic phrases. As such, we cannot assume that better overall parsing performance implies better PIE extraction performance.
It should be noted that, when assessing the quality of PIE extraction performance, the parser-based methods are sensitive to specific PIE types. That is, if a single PIE type is parsed incorrectly, then it is highly probable that all instances of that type are missed. If this type is also highly frequent, this means that a small change in actual performance yields a large change in evaluation scores. Our goal is to have a PIE extraction system that is robust across all PIE types, and thus the current evaluation setting does not align exactly with our aim.
Splitting out performance per PIE type reveals whether there is indeed a large variance in performance across types. Table TABREF76 shows the 25 most frequent PIE types in the corpus, and the performance of the in-context-parsing-based system on each. Except for two cases (in the black and round the bend), we see that the performance is in the 80–100% range, even showing perfect performance on the majority of types.
For none of the types do we see low precision paired with high recall, which indicates that the parser never matches a highly frequent non-PIE phrase. For the system with the no labels and no-directionality options (per-type numbers not shown here), however, this does occur. For example, ignoring the labels for the parse of the PIE have a go leads to the erroneous matching of many sentences containing a form of have to go, which is highly frequent, thus leading to a large drop in precision.
Although performance is stable across the most frequent types, among the less frequent types it is more spotty. This hurts overall performance, and there are potential gains in mitigating the poor performance on these types, such as for the time being. At the same time, the string matching methods show much more stable performance across types, and some of them do so with very high precision. As such, a combination of two such methods could boost performance significantly. If we use a high-precision string match-based method, such as the exact string match variant with a precision of 97.35%, recall could be improved for the wrongly parsed PIE types, without a significant loss of precision.
We experiment with two such combinations, by simply taking the union of the sets of extracted idioms of both systems, and filtering out duplicates. Results are shown in Table TABREF77. Both combinations show the expected effect: a clear gain in recall at a minimal loss in precision. Compared to the in-context-parsing-based system, the combination with exact string matching yields a gain in recall of over 6%, and the combination with inflectional string matching yields an even bigger gain of almost 8%, at precision losses of 0.6% and 0.8%, respectively. This indicates that the systems are very much complementary in the PIEs they extract. It also means that, when used in practice, combining inflectional string matching and parse-based extraction is the most reliable configuration.
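The combination itself is straightforward; the sketch below assumes each extraction is identified by a (document, sentence, PIE type) tuple, which is our assumption rather than a detail given in the text.

```python
def combine(extractions_a, extractions_b):
    """Take the union of two systems' extractions, filtering out duplicates."""
    return sorted(set(extractions_a) | set(extractions_b))

parser_based = {('CBC', 458, 'in the running'), ('ABV', 1300, 'out of the woods')}
string_based = {('CBC', 458, 'in the running'), ('J1C', 1341, 'have a go')}
print(len(combine(parser_based, string_based)))   # 3 unique extractions
```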
<<</Analysis>>>
<<</Dictionary-based PIE Extraction>>>
<<<Conclusions and Outlook>>>
We present an in-depth study on the automatic extraction of potentially idiomatic expressions based on dictionaries. The purpose of automatic dictionary-based extraction is, on the one hand, to function as a pre-extraction step in the building of a large idiom-annotated corpus. On the other hand, it can function as part of an idiom extraction system when combined with a disambiguation component. In both cases, the ultimate goal is to improve the processing of idiomatic expressions within NLP. This work consists of three parts: a comparative evaluation of the coverage of idiom dictionaries, the annotation of a PIE corpus, and the development and evaluation of several dictionary-based PIE extraction methods.
In the first part, we present a study of idiom dictionary coverage, which serves to answer the question of whether a single idiom dictionary, or a combination of dictionaries, can provide good coverage of the set of all English idioms. Based on the comparison of dictionaries to each other, we estimate that the overlap between them is limited, varying from 20% to 55%, which indicates a large divergence between the dictionaries. This can be explained by the fact that idioms vary widely by register, genre, language variety, and time period. In our case, it is also likely that the divergence is caused partly by the gap between crowdsourced dictionaries on the one hand, and a dictionary compiled by professional lexicographers on the other. Given these factors, we can conclude that a single dictionary cannot provide even close to complete coverage of English idioms, but that by combining dictionaries from various sources, significant gains can be made. Since `English idioms' are a diffuse and constantly changing set, we have no gold standard to compare to. As such, we conclude that multiple dictionaries should be used when possible, but that we cannot say anything definitive on the coverage of dictionaries with regard to the complete set of English idioms (which can only be approximated in the first place). A more comprehensive comparison of idiom resources could be made in the future by using more advanced automatic methods for matching, for example by using BIBREF32's (BIBREF32) method for measuring expression variability. This would make it easier to evaluate a larger number of dictionaries, since no manual effort would be required.
In the second part, we experiment with the exhaustive annotation of PIEs in a corpus of documents from the BNC. Using a set of 591 PIE types, much larger and more varied than in existing resources, we show that it is very much possible to establish a working definition of PIE that allows for a large amount of variation, while still being useful for reliable annotation. This resulted in high inter-annotator agreement, ranging from 0.74 to 0.91 Fleiss' Kappa. This means that we can build a resource to evaluate a wide-range idiom extraction system with relatively little effort. The final corpus of PIEs with sense annotations is publicly available; it consists of 2,239 PIE candidates, of which 1,050 are actual PIE instances, and covers 278 different PIE types.
Finally, several methods for the automatic extraction of PIE instances were developed and evaluated on the annotated PIE corpus. We tested methods of differing complexity, from simple string match to dependency parse-based extraction. Comparison of these methods revealed that the more computationally complex method, parser-based extraction, works best. Parser-based extraction is especially effective in capturing a larger amount of variation, but is less precise than string-based methods, mostly because of parser error. The best overall setting of this method, which parses idioms within context, yielded an F1-score of 89.13% on the test set. Parser error can be partly compensated for by combining the parse-based method and the inflectional string match method, which yields an F1-score of 92.01% (on the development set). This aligns well with the findings by BIBREF27, who found that combining simpler and more complex methods improves over just using a simple method in the case of extracting verb-particle constructions. This level of performance means that we can use the tool in corpus building. This greatly reduces the amount of manual extraction effort involved, while still maintaining a high level of recall. We make the source code for the different systems publicly available.
Note that, although used here in the context of PIE extraction, our methods are equally applicable to other phrase extraction tasks, for example the extraction of light-verb constructions, metaphoric constructions, collocations, or any other type of multiword expression (cf. BIBREF27, BIBREF25, BIBREF26). Similarly, our method can be conceived as a blueprint and extended to languages other than English. For this to be possible, for any given new language one would need a list of target expressions and, in the case of the parser-based method, a reliable syntactic parser. If this is not the case, the inflectional matching method can be used, which requires only a morphological analyser and generator. Obviously, for languages that are morphologically richer than English, one would need to develop strategies aimed at controlling non-exact matches, so as to enhance recall without sacrificing precision. Previous work on Italian, for example, has shown the feasibility of achieving such balance through controlled pattern matching BIBREF40. Languages that are typologically very different from English would obviously require a dedicated approach for the matching of PIEs in corpora, but the overall principles of extraction, using language-specific tools, could stay the same.
Currently, no corpora containing annotation of PIEs exist for languages other than English. However, the PARSEME corpus BIBREF19 already contains idioms (only idiomatic readings) for many languages and would only need annotation of literal usages of idioms to make up a set of PIEs. Paired with the Universal Dependencies project BIBREF41, which increasingly provides annotated data as well as processing tools for an ever growing number of languages, this seems an excellent starting point for creating PIE resources in multiple languages.
<<</Conclusions and Outlook>>>
<<</Title>>>
|
{
"references": [
"Abstract, Dictionary-based PIE Extraction"
],
"type": "disordered_section"
}
|
1911.08829
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Casting a Wide Net: Robust Extraction of Potentially Idiomatic Expressions
<<<Abstract>>>
Idiomatic expressions like `out of the woods' and `up the ante' present a range of difficulties for natural language processing applications. We present work on the annotation and extraction of what we term potentially idiomatic expressions (PIEs), a subclass of multiword expressions covering both literal and non-literal uses of idiomatic expressions. Existing corpora of PIEs are small and have limited coverage of different PIE types, which hampers research. To further progress on the extraction and disambiguation of potentially idiomatic expressions, larger corpora of PIEs are required. In addition, larger corpora are a potential source for valuable linguistic insights into idiomatic expressions and their variability. We propose automatic tools to facilitate the building of larger PIE corpora, by investigating the feasibility of using dictionary-based extraction of PIEs as a pre-extraction tool for English. We do this by assessing the reliability and coverage of idiom dictionaries, the annotation of a PIE corpus, and the automatic extraction of PIEs from a large corpus. Results show that combinations of dictionaries are a reliable source of idiomatic expressions, that PIEs can be annotated with a high reliability (0.74-0.91 Fleiss' Kappa), and that parse-based PIE extraction yields highly accurate performance (88% F1-score). Combining complementary PIE extraction methods increases reliability further, to over 92% F1-score. Moreover, the extraction method presented here could be extended to other types of multiword expressions and to other languages, given that sufficient NLP tools are available.
<<</Abstract>>>
<<<Introduction>>>
Idiomatic expressions pose a major challenge for a wide range of applications in natural language processing BIBREF0. These include machine translation BIBREF1, BIBREF2, semantic parsing BIBREF3, sentiment analysis BIBREF4, and word sense disambiguation BIBREF5. Idioms show significant syntactic and morphological variability (e.g. beans being spilled for spill the beans), which makes them hard to find automatically. Moreover, their non-compositional nature makes idioms really hard to interpret, because their meaning is often very different from the meanings of the words that make them up. Hence, successful systems need not only be able to recognise idiomatic expressions in text or dialogue, but they also need to give a proper interpretation to them. As a matter of fact, current language technology performs badly on idiom understanding, a phenomenon that perhaps has not received enough attention.
Nearly all current language technology used in NLP applications is based on supervised machine learning. This requires large amounts of labelled data. In the case of idiom interpretation, however, only small datasets are available. These contain just a couple of thousand idiom instances, covering only about fifty different types of idiomatic expressions. In fact, existing annotated corpora tend to cover only a small set of idiom types, comprising just a few syntactic patterns (e.g., verb-object combinations), of which a limited number of instances are extracted from a large corpus.
This is not surprising as preparing and compiling such corpora involves a large amount of manual extraction work, especially if one wants to allow for form variation in the idiomatic expressions (for example, extracting cooking all the books for cook the books). This work involves both the crafting of syntactic patterns to match potential idiomatic expressions and the filtering of false extractions (non-instances of the target expression, e.g. due to wrong parses), and increases with the number of idiom types included in the corpus (which, in the worst case, means an exponential increase in false extractions). Thus, building a large corpus of idioms, especially one that covers many types in many syntactic constructions, is costly. If a high-precision, high-recall system can be developed for the task of extracting the annotation candidates, this cost will be greatly reduced, making the construction of a large corpus much more feasible.
The variability of idioms has been a significant topic of interest among researchers of idioms. For example, BIBREF6 investigates the internal and external modification of a set of idioms in a large English corpus, whereas BIBREF7 quantifies and classifies the variation of a set of idioms in a large corpus of Dutch, setting up a useful taxonomy of variation types. Both find that, although idiomatic expressions mainly occur in their dictionary form, there is a significant minority of idiom instances that occur in non-dictionary variants. Additionally, BIBREF8 show that idiom variants retain their idiomatic meaning more often and are processed more easily than previously assumed. This emphasises the need for corpora covering idiomatic expressions to include these variants, and for tools to be robust in dealing with them.
As such, the aim of this article is to describe methods and provide tools for constructing larger corpora annotated with a wider range of idiom types than currently in existence due to the reduced amount of manual labour required. In this way we hope to stimulate further research in this area. In contrast to previous approaches, we want to catch as many idiomatic expressions as possible, and we achieve this by casting a wide net, that is, we consider the widest range of possible idiom variants first and then filter out any bycatch in a way that requires the least manual effort.
We expect that research will benefit from having larger corpora by improving evaluation quality, by allowing for the training of better supervised systems, and by providing additional linguistic insight into idiomatic expressions. A reliable method for extracting idiomatic expressions is not only needed for building an annotated corpus, but can also be used as part of an automatic idiom processing pipeline. In such a pipeline, extracting potentially idiomatic expressions can be seen as a first step before idiom disambiguation, and the combination of the two modules then functions as a complete idiom extraction system.
The main research question that we aim to answer in this article is whether dictionary-based extraction of potentially idiomatic expressions is robust and reliable enough to facilitate the creation of wide-coverage sense-annotated idiom corpora.
By answering this question we make several contributions to research on multiword expressions, in particular that of idiom extraction. Firstly, we provide an overview of existing research on annotating idiomatic expressions in corpora, showing that current corpora cover only small sets of idiomatic types (Section SECREF3). Secondly, we quantify the coverage and reliability of a set of idiom dictionaries, demonstrating that there is little overlap between resources (Section SECREF4). Thirdly, we develop and release an evaluation corpus for extracting potentially idiomatic expressions from text (Section SECREF5). Finally, various extraction systems and combinations thereof are implemented, made available to the research community, and evaluated empirically (Section SECREF6).
<<</Introduction>>>
<<<New Terminology: Potentially Idiomatic Expression (PIE)>>>
The ambiguity of phrases like wake up and smell the coffee poses a terminological problem. Usually, these phrases are called idiomatic expressions, which is suitable when they are used in an idiomatic sense, but not so much when they are used in a literal sense. Therefore, we propose a new term: potentially idiomatic expressions, or PIEs for short. The term potentially idiomatic expression refers to those expressions which can have an idiomatic meaning, regardless of whether they actually have that meaning in a given context. So, see the light is a PIE in both `After another explanation, I finally saw the light' and `I saw the light of the sun through the trees', while it is an idiomatic expression in the first context, and a literal phrase in the latter context.
The processing of PIEs involves three main challenges: the discovery of (new) PIE types, the extraction of instances of known PIE types in text, and the disambiguation of PIE instances in context. Here, we propose calling the discovery task simply PIE discovery, the extraction task simply PIE extraction, and the disambiguation task PIE disambiguation. Note that these terms contrast with the terms used in existing research. There, the discovery task is called type-based idiom detection and the disambiguation task is called token-based idiom detection (cf. BIBREF10, BIBREF11), although this usage is not always consistent. Because these terms are very similar, they are potentially confusing, and that is why we propose novel terminology.
Other terminology comes from literature on multiword expressions (MWEs) more generally, i.e. not specific to idioms. Here, the task of finding new MWE types is called MWE discovery and finding instances of known MWE types is called MWE identification BIBREF12. Note, however, that MWE identification generally consists of finding only the idiomatic usages of these types (e.g. BIBREF13). This means that MWE identification consists of both the extraction and disambiguation tasks, performed jointly. In this work, we propose to split this into two separate tasks, and we are concerned only with the PIE extraction part, leaving PIE disambiguation as a separate problem.
<<</New Terminology: Potentially Idiomatic Expression (PIE)>>>
<<<Related Work>>>
This section is structured so as to reflect the dual contribution of the present work. First, we discuss existing resources annotated for idiomatic expressions. Second, we discuss existing approaches to the automatic extraction of idioms.
<<<Annotated Corpora and Annotation Schemes for Idioms>>>
There are four sizeable sense-annotated PIE corpora for English: the VNC-Tokens Dataset BIBREF9, the Gigaword dataset BIBREF14, the IDIX Corpus BIBREF10, and the SemEval-2013 Task 5 dataset BIBREF15. An overview of these corpora is presented in Table TABREF7.
<<<VNC-Tokens>>>
The VNC-Tokens dataset contains 53 different PIE types. BIBREF9 extract up to 100 instances from the British National Corpus for each type, for a total of 2,984 instances. These types are based on a pre-existing list of verb-noun combinations and were filtered for frequency and whether two idiom dictionaries both listed them. Instances were extracted automatically, by parsing the corpus and selecting all sentences with the right verb and noun in a direct-object relation. It is unclear whether the extracted sentences were manually checked, but no false extractions are mentioned in the paper or present in the dataset.
All extracted PIE instances were annotated for sense as either idiomatic, literal or unclear. This is a self-explanatory annotation scheme, but BIBREF9 note that senses are not binary, but can form a continuum. For example, the idiomaticity of have a word in `You have my word' is different from both the literal sense in `The French have a word for this' and the figurative sense in `My manager asked to have a word'. They instructed annotators to choose idiomatic or literal even in ambiguous middle-of-the-continuum cases, and restrict the unclear label only to cases where there is not enough context to disambiguate the meaning of the PIE.
<<</VNC-Tokens>>>
<<<Gigaword>>>
BIBREF14 present a corpus of 17 PIE types, for which they extracted all instances from the Gigaword corpus BIBREF18, yielding a total of 3,964 instances. BIBREF14 extracted these instances semi-automatically by manually defining all inflectional variants of the verb in the PIE and matching these in the corpus. They did not allow for inflectional variations in non-verb words, nor did they allow intervening words. They annotated these potential idioms as either literal or figurative, excluding ambiguous and unclear instances from the dataset.
<<</Gigaword>>>
<<<IDIX>>>
BIBREF10 build on the methodology of BIBREF14, but annotate a larger set of idioms (52 types) and extract all occurrences from the BNC rather than the Gigaword corpus, for a total of 4,022 instances including false extractions. BIBREF10 use a more complex semi-automatic extraction method, which involves parsing the corpus, manually defining the dependency patterns that match the PIE, and extracting all sentences containing those patterns from the corpus. This allows for larger form variations, including intervening words and inflectional variation of all words. In some cases, this yields many non-PIE extractions, as for recharge one's batteries in Example SECREF10. These were not filtered out before annotation, but rather filtered out as part of the annotation process, by having false extraction as an additional annotation label.
For sense annotation, they use an extensive tagset, distinguishing literal, non-literal, both, meta-linguistic, embedded, and undecided labels. Here, the both label (Example SECREF10) is used for cases where both senses are present, often as a form of deliberate word play. The meta-linguistic label (Example SECREF10) applies to cases where the PIE instance is used as a linguistic item to discuss, not as part of a sentence. The embedded label (Example SECREF10) applies to cases where the PIE is embedded in a larger figurative context, which makes it impossible to say whether a literal or figurative sense is more applicable. The undecided label is used for unclear and undecidable cases. They take into account the fact that a PIE can have multiple figurative senses, and enumerate these separately as part of the annotation.
. These high-performance, rugged tools are claimed to offer the best value for money on the market for the enthusiastic d-i-yer and tradesman, and for the first time offer the possibility of a battery recharging time of just a quarter of an hour. (from IDIX corpus, ID #314)
. Left holding the baby, single mothers find it hard to fend for themselves. (from BIBREF10, p.642)
. It has long been recognised that expressions such as to pull someone's leg, to have a bee in one's bonnet, to kick the bucket, to cook someone's goose, to be off one's rocker, round the bend, up the creek, etc. are semantically peculiar. (from BIBREF10, p.642)
. You're like a restless bird in a cage. When you get out of the cage, you'll fly very high. (from BIBREF10, p.642)
The both, meta-linguistic, and embedded labels are useful and linguistically interesting distinctions, although they occur very rarely (0.69%, 0.15%, and an unknown %, respectively). As such, we include these cases in our tagset (see Section SECREF5), but group them under a single label, other, to reduce annotation complexity. We also follow BIBREF10 in that we combine both the PIE/non-PIE annotation and the sense annotation in a single task.
<<</IDIX>>>
<<<SemEval-2013 Task 5b>>>
BIBREF15 created a dataset for SemEval-2013 Task 5b, a task on detecting semantic compositionality in context. They selected 65 PIE types from Wiktionary, and extracted instances from the ukWaC corpus BIBREF17, for a total of 4,350 instances. It is unclear how they extracted the instances, and how much variation was allowed for, although there is some inflectional variation in the dataset. An unspecified amount of manual filtering was done on the extracted instances.
The extracted PIE instances were labelled as literal, idiomatic, both, or undecidable. Interestingly, they crowdsourced the sense annotations using CrowdFlower, with high agreement (90%–94% pairwise). Undecidable cases and instances on which annotators disagreed were removed from the dataset.
<<</SemEval-2013 Task 5b>>>
<<<General Multiword Expression Corpora>>>
In addition to the aforementioned idiom corpora, there are also corpora focused on multiword expressions (MWEs) in a more general sense. As idioms are a subcategory of MWEs, these corpora also include some idioms. The most important of these are the PARSEME corpus BIBREF19 and the DiMSUM corpus BIBREF20.
DiMSUM provides annotations of over 5,000 MWEs in approximately 90K tokens of English text, consisting of reviews, tweets and TED talks. However, they do not categorise the MWEs into specific types, meaning we cannot easily quantify the number of idioms in the corpus. In contrast to the corpus-specific sense labels seen in other corpora, DiMSUM annotates MWEs with WordNet supersenses, which provide a broad category of meaning for each MWE.
Similarly, the PARSEME corpus consists of over 62K MWEs in almost 275K tokens of text across 18 different languages (with the notable exception of English). The main differences with DiMSUM, except for scale and multilingualism, are that it only includes verbal MWEs, and that subcategorisation is performed, including a specific category for idioms. Idioms make up almost a quarter of all verbal MWEs in the corpus, although the proportion varies wildly between languages. In both corpora, MWE annotation was done in an unrestricted manner, i.e. there was not a predefined set of expressions to which annotation was restricted.
<<</General Multiword Expression Corpora>>>
<<<Overview>>>
In sum, there is large variation in corpus creation methods, regarding PIE definition, extraction method, annotation schemes, base corpus, and PIE type inventory. Depending on the goal of the corpus, the amount of deviation that is allowed from the PIE's dictionary form to the instances can be very little BIBREF14, to quite a lot BIBREF10. The number of PIE types covered by each corpus is limited, ranging from 17 to 65 types, often limited to one or more syntactic patterns. The extraction of PIE instances is usually done in a semi-automatic manner, by manually defining patterns in a text or parse tree, and doing some manual filtering afterwards. This works well, but an extension to a large number of PIE types (e.g. several hundreds) would also require a large increase in the amount of manual effort involved. Considering the sense annotations done on the PIE corpora, there is significant variation, with BIBREF9 using only three tags, whereas BIBREF10 use six. Outside of PIE-specific corpora there are MWE corpora, which provide a different perspective. A major difference there is that annotation is not restricted to a pre-specified set of expressions, which has not been done for PIEs specifically.
<<</Overview>>>
<<</Annotated Corpora and Annotation Schemes for Idioms>>>
<<<Extracting Idioms from Corpora>>>
There are two main approaches to idiom extraction. The first approach aims to distinguish idioms from other multiword phrases, where the main purpose is to expand idiom inventories with rare or novel expressions BIBREF21, BIBREF22, BIBREF23, BIBREF24. The second approach aims to extract all occurrences of a known idiomatic expression in a text. In this paper, we focus on the latter approach. We rely on idiom dictionaries to provide a list of PIE types, and build a system that extracts all instances of those PIE types from a corpus. High-quality idiom dictionaries exist for most well-resourced languages, but their reliability and coverage is not known. As such, we quantify the coverage of dictionaries in Section SECREF4.
There is, to the best of our knowledge, no existing work that focuses on dictionary-based PIE extraction. However, there is closely-related work by BIBREF25, who present a system for the dictionary-based extraction of verb-noun combinations (VNCs) in English and Spanish. In their case, the VNCs can be any kind of multiword expression, which they subdivide into literal expressions, collocations, light verb constructions, metaphoric expressions, and idioms. They extract 173 English VNCs and 150 Spanish VNCs and annotate these with both their lexico-semantic MWE type and the amount of morphosyntactic variation they exhibit. BIBREF25 then compare a word sequence-based method, a chunking-based method, and a parse-based method for VNC extraction. Each method relies on the morpho-syntactic information in order to limit false extractions. Precision is evaluated manually on a sample of the extracted VNCs, and recall is estimated by calculating the overlap between the output of the three methods. Evaluation shows that the methods are highly complementary both in recall, since they extract different VNCs, and in precision, since combining the extractors yields fewer false extractions.
Whereas BIBREF25 focus on both idiomatic and literal uses of the set of expressions, as in this paper, BIBREF26 tackle only half of that task, namely extracting only literal uses of a given set of VMWEs in Polish. This complicates the task, since it combines extracting all occurrences of the VMWEs and then distinguishing literal from idiomatic uses. Interestingly, they also experiment with models of varying complexity, i.e. just words, part-of-speech tags, and syntactic structures. Their results are hard to put into perspective, however, since literal VMWEs are very rare in their corpus, whereas corpora containing PIEs tend to show a more balanced distribution.
Other similar work to ours also focuses on MWEs more generally, or on different subtypes of MWEs. In addition, these tend to combine both extraction and disambiguation in that they aim to extract only idiomatically used instances of the MWE, without extracting literally used instances or non-instances. Within this line of work, BIBREF27 focuses on verb-particle constructions, BIBREF28 on verbal MWEs (including idioms), and BIBREF29 on verbal MWEs (especially non-canonical variants).
Both BIBREF28 and BIBREF29 rely on a pre-defined set of expressions, whereas BIBREF27 also extracts unseen expressions, although based on a pre-defined set of particles and within the very narrow syntactic frame of verb-particle constructions. The work of BIBREF27 is most similar to ours in that it builds an unsupervised system using existing NLP tools (PoS taggers, chunkers, parsers) and finds that a combination of systems using those tools performs best, as we find in Section SECREF69. BIBREF28 and BIBREF29, by contrast, use supervised classifiers which require training data, not just for the task in general, but specific to the set of expressions used in the task.
Although our approach is similar to that of BIBREF25, both in the range of methods used and in the goal of extracting certain multiword expressions regardless of morphosyntactic variation, there are two main differences. First, we use dictionaries, but extract entries automatically and do not manually annotate their type and variability. As a result, our methods rely only on the surface form of the expression taken from the dictionary. Second, we evaluate precision and recall in a more rigorous way, by using an evaluation corpus exhaustively annotated for PIEs. In addition, we do not put any restriction on the syntactic type of the expressions to be extracted, which BIBREF27, BIBREF28, BIBREF25, and BIBREF29 all do.
<<</Extracting Idioms from Corpora>>>
<<</Related Work>>>
<<<Coverage of Idiom Inventories>>>
<<<Background>>>
Since our goal is developing a dictionary-based system for extracting potentially idiomatic expressions, we need to devise a proper method for evaluating such a system. This is not straightforward, even though the final goal of such a system is simple: it should extract all potentially idiomatic expressions from a corpus and nothing else, regardless of their sense and the form they are used in. The type of system proposed here hence has two aspects that can be evaluated: the dictionary that it is using as a resource for idiomatic expressions, and the extractor component that finds idioms in a corpus.
The difficulty here is that there is no undisputed and unambiguous definition of what counts as an idiom BIBREF30, as is the case with multiword expressions in general BIBREF12. Of course, a complete set of idiomatic expressions for English (or any other language) is impossible to get due to the broad and ever-changing nature of language. This incompleteness is exacerbated by the ambiguity problem: if we had a clear definition of idiom, we could make an attempt at evaluating idiom dictionaries on their accuracy, but it is practically impossible to come up with a definition of idiom that leaves no room for ambiguity. This ambiguity, among others, creates a large grey area between clearly non-idiomatic phrases on the one hand (e.g. buy a house), and clear potentially idiomatic phrases on the other hand (e.g. buy the farm). As a consequence, we cannot empirically evaluate the coverage of the dictionaries. Instead, in this work, we will quantify the divergence between various idiom dictionaries and corpora, with regard to their idiom inventories. If they show large discrepancies, we take that to mean that either there is little agreement on definitions of idiom or the category is so broad that a single resource can only cover a small proportion. Conversely, if there is large agreement, we assume that idiom resources are largely reliable, and that there is consensus around what is, and what is not, an idiomatic expression.
We use different idiom resources and assume that the combined set of resources yields an approximation of the true set of idioms in English. A large divergence between the idiom inventories of these resources would then suggest a low recall for a single resource, since many other idioms are present in the other resources. Conversely, if the idiom inventories largely overlap, that indicates that a single resource can already yield decent coverage of idioms in the English language. The results of the dictionary comparisons are in Section SECREF36.
<<</Background>>>
<<<Selected Idiom Resources (Data and Method)>>>
We evaluate the quality of three idiom dictionaries by comparing them to each other and to three idiom corpora. Before we report on the comparison we first describe why we select and how we prepare these resources. We investigate the following six idiom resources:
Wiktionary;
the Oxford Dictionary of English Idioms (ODEI, BIBREF31);
UsingEnglish.com (UE);
the Sporleder corpus BIBREF10;
the VNC dataset BIBREF9;
and the SemEval-2013 Task 5 dataset BIBREF15.
These dictionaries were selected because they are available in digital format. Wiktionary and UsingEnglish have the added benefit of being freely available. However, they are both crowdsourced, which means they lack professional editing. In contrast, ODEI is a traditional dictionary, created and edited by lexicographers, but it has the downside of not being freely available.
For Wiktionary, we extracted all idioms from the category `English Idioms' from the English version of Wiktionary. We took the titles of all pages containing a dictionary entry and considered these idioms. Since we focus on multiword idiomatic expressions, we filtered out all single-word entries in this category. More specifically, since Wiktionary is a constantly changing resource, we used the 8,482 idioms retrieved on 10-03-2017, 15:30. We used a similar extraction method for UE, a web page containing freely available resources for ESL learners, including a list of idioms. We extracted all idioms which have publicly available definitions, which numbered 3,727 on 10-03-2017, 15:30. Again, single-word entries and duplicates were filtered out. Concerning ODEI, all idioms from the e-book version were extracted, amounting to 5,911 idioms scraped on 13-03-2017, 10:34. Here we performed an extra processing step to expand idioms containing content in parentheses, such as a tough (or hard) nut (to crack). Using a set of simple expansion rules and some hand-crafted exceptions, we automatically generated all variants for this idiom, with good, but not perfect accuracy. For the example above, the generated variants are: {a tough nut, a tough nut to crack, a hard nut, a hard nut to crack}. The idioms in the VNC dataset are in the form verb_noun, e.g. blow_top, so they were manually expanded to a regular dictionary form, e.g. blow one's top before comparison.
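To make the expansion of parenthesised material concrete, the following is a minimal sketch of the idea, not the actual rule set used for ODEI (which included hand-crafted exceptions). It assumes that `(or X)' marks an alternative for the preceding word and that any other parenthesised material is optional:

```python
import re
from itertools import product

def expand_entry(entry):
    """Generate dictionary-form variants for an entry such as
    'a tough (or hard) nut (to crack)'.  Approximation only: '(or X)'
    is treated as an alternative for the preceding word, any other
    parenthesised material as optional."""
    segments = re.split(r'(\([^)]*\))', entry)
    slots = []  # each slot is a list of alternative strings
    for seg in segments:
        seg = seg.strip()
        if not seg:
            continue
        if seg.startswith('('):
            inner = seg[1:-1].strip()
            if inner.startswith('or ') and slots:
                slots[-1].append(inner[3:])   # alternative for previous word
            else:
                slots.append(['', inner])     # optional material
        else:
            slots.extend([[word] for word in seg.split()])
    variants = {' '.join(w for w in combo if w) for combo in product(*slots)}
    return sorted(variants)

print(expand_entry('a tough (or hard) nut (to crack)'))
# ['a hard nut', 'a hard nut to crack', 'a tough nut', 'a tough nut to crack']
```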
<<</Selected Idiom Resources (Data and Method)>>>
<<<Method>>>
In many cases, using simple string-match to check overlap in idioms does not work, as exact comparison of idioms misses equivalent idioms that differ only slightly in dictionary form. Differences between resources are caused by, for example:
inflectional variation (crossing the Rubicon — cross the Rubicon);
variation in scope (as easy as ABC — easy as ABC);
determiner variation (put the damper on — put a damper on);
spelling variation (mind your p's and q's — mind your ps and qs);
order variation (call off the dogs — call the dogs off);
and different conventions for placeholder words (recharge your batteries — recharge one's batteries), where both your and one's can generalise to any possessive personal pronoun.
These minor variations do not fundamentally change the nature of the idiom, and we should count these types of variation as belonging to the same idiom (see also BIBREF32, who devise a measure to quantify different types of variation allowed by specific MWEs). So, to get a good estimate of the true overlap between idiom resources, these variations need to be accounted for, which we do in our flexible matching approach.
There is one other case of variation not listed above, namely lexical variation (e.g. rub someone up the wrong way - stroke someone the wrong way). We do not abstract over this, since we consider lexical variation to be a more fundamental change to the nature of the idiom. That is, a lexical variant is an indicator of the coverage of the dictionary, where the other variations are due to different stylistic conventions and do not indicate actual coverage. In addition, it is easy to abstract over the other types of variation in an NLP application, but this is not the case for lexical variation.
The overlap counts are estimated by abstracting over all variations except lexical variation in a semi-automatic manner, using heuristics and manual checking. Potentially overlapping idioms are selected using the following set of heuristics: whether an idiom from one resource is a substring (including gaps) of an idiom in the other resource, whether the words of an idiom form a subset of the words of an idiom in the other resource, and whether there is an idiom in the other resource which has a Levenshtein ratio of over 0.8. The Levenshtein ratio is an indicator of the Levenshtein distance between the two idioms relative to their length. These potential matches are then judged manually on whether they are really forms of the same idiom or not.
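As an illustration of the pair-selection heuristics, the sketch below flags candidate pairs using a word-subset check and a string-similarity threshold. It is not the exact implementation: difflib's ratio is used here as a stand-in for the Levenshtein ratio, the substring-with-gaps check is folded into the word-subset check, and the manual judgement step is of course not included.

```python
from difflib import SequenceMatcher

def similar_enough(idiom_a, idiom_b, threshold=0.8):
    """Heuristically flag two dictionary forms as potential variants of
    the same idiom; flagged pairs still need a manual check."""
    words_a, words_b = set(idiom_a.split()), set(idiom_b.split())
    subset = words_a <= words_b or words_b <= words_a
    string_ratio = SequenceMatcher(None, idiom_a, idiom_b).ratio()
    return subset or string_ratio >= threshold

def candidate_pairs(dict_a, dict_b):
    """All cross-dictionary pairs that should be judged manually."""
    return [(a, b) for a in dict_a for b in dict_b if similar_enough(a, b)]

pairs = candidate_pairs({'cross the Rubicon', 'spill the beans'},
                        {'crossing the Rubicon', 'kick the bucket'})
print(pairs)  # [('cross the Rubicon', 'crossing the Rubicon')]
```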
<<</Method>>>
<<<Results>>>
The results of using exact string matching to quantify the overlap between the dictionaries is illustrated in Figure FIGREF37.
Overlap between the three dictionaries is low. A possible explanation for this lies with the different nature of the dictionaries. Oxford is a traditional dictionary, created and edited by professional lexicographers, whereas Wiktionary is a crowdsourced dictionary open to everyone, and UsingEnglish is similar, but focused on ESL-learners. It is likely that these different origins result in different idiom inventories. Similarly, we would expect that the overlap between a pair of traditional dictionaries, such as the ODEI and the Penguin Dictionary of English Idioms BIBREF33, would be significantly higher. It should also be noted, however, that comparisons between more similar dictionaries also found relatively little overlap (BIBREF34; BIBREF35). A counterpoint is provided by BIBREF36, who quantifies coverage of verb-particle constructions in three different dictionaries and finds large overlap, perhaps because verb-particle constructions are a more restricted class.
As noted previously, using exact string matching is a very limited approach to calculating overlap. Therefore, we used heuristics and manual checking to get more precise numbers, as shown in Table TABREF39, which also includes the three corpora in addition to the three dictionaries. As the manual checking only involved judging similar idioms found in pairs of resources, we cannot calculate three-way overlap as in Figure FIGREF37. The counts of the pair-wise overlap between dictionaries differ significantly between the two methods, which serves to illustrate the limitations of using only exact string matching and the necessity of using more advanced methods and manual effort.
Several insights can be gained from the data in Table TABREF39. The relation between Wiktionary and the SemEval corpus is obvious (cf. Section SECREF12), given the 96.92% coverage. For the other dictionary-corpus pairs, the coverage increases proportionally with the size of the dictionary, except in the case of UsingEnglish and the Sporleder corpus. The proportional increase indicates no clear qualitative differences between the dictionaries, i.e. one does not have a significantly higher percentage of non-idioms than the other, when compared to the corpora.
Generally, overlap between dictionaries and corpora is low: the two biggest, ODEI and Wiktionary have only around 30% overlap, while the dictionaries also cover no more than approximately 70% of the idioms used in the various corpora. Overlap between the three corpora is also extremely low, at below 5%. This is unsurprising, since a new dataset is more interesting and useful when it covers a different set of idioms than used in an existing dataset, and thus is likely constructed with this goal in mind.
<<</Results>>>
<<</Coverage of Idiom Inventories>>>
<<<Corpus Annotation>>>
In order to evaluate the PIE extraction methods developed in this work (Section SECREF6), we exhaustively annotate an evaluation corpus with all instances of a pre-defined set of PIEs. As part of this, we come up with a workable definition of PIEs, and measure the reliability of PIE annotation by inter-annotator agreement.
Assuming that we have a set of idioms, the main problem of defining what is and what is not a potentially idiomatic expression is caused by variation. In principle, a potentially idiomatic expression is an instance of a phrase that, when seen without context, could have either an idiomatic or a literal meaning. This is clearest for the dictionary form of the idiom, as in Example SECREF5. Literal uses generally allow all kinds of variation, but not all of these variations allow a figurative interpretation, e.g. Example SECREF5. However, how much variation an idiom can undergo while retaining its figurative interpretation is different for each expression, and judgements of this might vary from one speaker to the other. An example of this is spill the bean, a variant of spill the beans, in Example SECREF5 judged by BIBREF21 as being highly questionable. However, even here a corpus example can be found containing the same variant used in a figurative sense (Example SECREF5).
As such, we assume that we cannot know a priori which variants of an expression allow a figurative reading, and are thus a potentially idiomatic expression. Therefore we consider every possible morpho-syntactic variation of an idiom a PIE, regardless of whether it actually allows a figurative reading. We believe the boundaries of this variation can only be determined based on corpus evidence, and even then they are likely variable.
Note that a similar question is tackled by BIBREF26, when they establish the boundary between a `literal reading of a VMWE' and a `coincidental co-occurrence'. BIBREF26's answer is similar to ours, in that they count something as a literal reading of a VMWE if `the same or equivalent dependencies hold between [the expression]'s components as in its canonical form'.
. John kicked the bucket last night.
. * The bucket, John kicked last night.
. ?? Azin spilled the bean. (from BIBREF21)
. Alba reveals Fantastic Four 2 details The Invisible Woman actress spills the bean on super sequel (from ukWaC)
<<<Evaluating the Extraction Methods>>>
Evaluating the extraction methods is easier than evaluating dictionary coverage, since the goal of the extraction component is more clearly delimited: given a set of PIEs from one or more dictionaries, extract all occurrences of those PIEs from a corpus. Thus, rather than dealing with the undefined set of all PIEs, we can work with a clearly defined and finite set of PIEs from a dictionary.
Because we have a clearly defined set of PIEs, we can exhaustively annotate a corpus for PIEs, and use that annotated corpus for automatic evaluation of extraction methods using recall and precision. This allows us to facilitate and speed up annotation by pre-extracting sentences possibly containing a PIE. After the corpus is annotated, the precision and recall can be easily estimated by comparing the extracted PIE instances to those marked in the corpus. The details of the corpus selection, dictionary selection, extraction heuristic and annotation procedure are presented in Section SECREF46, and the details and results of the various extraction methods are presented in Section SECREF6.
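Since both the gold annotations and the system output are finite sets of PIE instances, scoring reduces to set comparison. The following is a minimal sketch, assuming instances are represented as (sentence id, PIE type) tuples; an offset-based representation would work analogously.

```python
def evaluate(extracted, gold):
    """Precision, recall and F1 for PIE extraction, treating each
    instance as a (sentence_id, pie_type) tuple."""
    extracted, gold = set(extracted), set(gold)
    true_pos = len(extracted & gold)
    precision = true_pos / len(extracted) if extracted else 0.0
    recall = true_pos / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(1, 'spill the beans'), (7, 'at sea'), (9, 'jump ship')}
extracted = {(1, 'spill the beans'), (7, 'at sea'), (12, 'have a go')}
print(evaluate(extracted, gold))  # roughly (0.667, 0.667, 0.667)
```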
<<</Evaluating the Extraction Methods>>>
<<<Base Corpus and Idiom Selection>>>
As a base corpus, we use the XML version of the British National Corpus BIBREF37, because of its size, variety, and wide availability. The BNC is pre-segmented into s-units, which we take to be sentences, w-units, which we take to be words, and c-units, punctuation. We then extract the text of all w-units and c-units. We keep the sentence segmentation, resulting in a set of plain text sentences. All sentences are included, except for sentences containing <gap> elements, which are filtered out. These <gap> elements indicate places where material from the original has been left out, e.g. for anonymisation purposes. Since this can result in incomplete sentences that cannot be parsed correctly, we filter out sentences containing these gaps.
We use only the written part of the BNC. From this, we extract a set of documents with the aim of having as much genre variation as possible. To achieve this, we select the first document in each genre, as defined by the classCode attribute (e.g. nonAc, commerce, letters). The resulting set of 46 documents makes up our base corpus. Note that these documents vary greatly in size, which means the resulting corpus is varied, but not balanced in terms of size (Table TABREF43). The documents are split across a development and test set, as specified at the end of Section SECREF46. We exclude documents with IDs starting with A0 from all annotation and evaluation procedures, as these were used during development of the extraction tool and annotation guidelines.
As for the set of potentially idiomatic expressions, we use the intersection of the three dictionaries, Wiktionary, Oxford, and UsingEnglish. Based on the assumption that, if all three resources include a certain idiom, it must unquestionably be an idiom, we choose the intersection (also see Figure FIGREF37). This serves to exclude questionable entries, like at all, which is in Wiktionary. The final set of idioms used for these experiments consists of 591 different multiword expressions. Although we aim for wide coverage, this is a necessary trade-off to ensure quality. At the same time, it leaves us with a set of idiom types that is approximately ten times larger than present in existing corpora. The set of 591 idioms includes idioms with a large variety of syntactic patterns, of which the most frequent ones are shown in Table TABREF44. The statistics show that the types most prevalent in existing corpora, verb-noun and preposition-noun combinations, are indeed the most frequent ones, but that there is a sizeable minority of types that do not fall into those categories, including coordinated adjectives, coordinated nouns, and nouns with prepositional phrases. This serves to emphasise the necessity of not restricting corpora to a small set of syntactic patterns.
<<</Base Corpus and Idiom Selection>>>
<<<Extraction of PIE Candidates>>>
To annotate the corpus completely manually would require annotators to read the whole corpus, and cross-reference each sentence to a list of almost 600 PIEs, to check whether one of those PIEs occurs in a sentence. We do not consider this a feasible annotation setting, due to both the difficulty of recognising literal usages of idioms and the time cost needed to find enough PIEs, given their low overall frequency. As such, we use a pre-extraction step to present candidates for annotation to the human annotators.
Given the corpus and the set of PIEs, we heuristically extract the PIE candidates as follows: given an idiomatic expression, extract every sentence which contains all the defining words of the idiom, in any form. This ensures that all possibly matching sentences get extracted, while greatly pruning the amount of sentences for annotators to look at. In addition, it allows us to present the heuristically matched PIE type and corresponding words to the annotators, which makes it much easier to judge whether something is a PIE or not. This also means that annotators never have to go through the full list of PIEs during the annotation process.
Initially, the heuristic simply extracted any sentence containing all the required words, where a word is any of the inflectional variants of the words in the PIE, except for determiners and punctuation. This method produced large amounts of noise, that is, a set of PIE candidates with only a very low percentage of actual PIEs. This was caused by the presence of some highly frequent PIEs with very little defining lexical content, such as on the make, and in the running. For example, with the original method, every sentence containing the preposition on, and any inflectional form of the verb make was extracted, resulting in a huge number of non-PIE candidates.
To limit the amount of noise, two restrictions were imposed. The first restriction disallows word order variation for PIEs which do not contain a verb. The rationale behind this is that word order variation is only possible with PIEs like spill the beans (e.g. the beans were spilled), and not with PIEs like in the running (*the running in??). The second restriction is that we limit the number of words that can be inserted between the words of a PIE, but only for PIEs like on the make, and in the running, i.e. PIEs which only contain prepositions, determiners and a single noun. The number of intervening words was limited to three tokens, allowing for some variation, as in Example SECREF45, but preventing sentences like Example SECREF45 from being extracted. This restriction could result in the loss of some PIE candidates with a large number of intervening words. However, the savings in annotation time clearly outweigh the small loss in recall in this situation.
. Either at New Year or before July you can anticipate a change in the everyday running of your life. (in the running - BNC - document CBC - sentence 458)
. [..] if [he] hung around near the goal or in the box for that matter instead of running all over the show [..] (in the running - BNC - document J1C - sentence 1341)
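The sketch below gives a rough, simplified re-implementation of this pre-extraction heuristic. The inflections function is a toy placeholder for the morphological tools used in the actual pipeline, and the two restrictions are collapsed into a single in-order, limited-gap check for verbless PIEs.

```python
def inflections(word):
    """Toy stand-in for the morphological tools used in the pipeline:
    returns a set of inflectional variants for a word."""
    return {word, word + 's', word + 'ed', word + 'ing'}

def is_candidate(sentence, pie, pie_has_verb, max_gap=3):
    """Simplified pre-extraction heuristic: a sentence is a candidate if
    it contains all defining words of the PIE in some inflectional form.
    For verbless PIEs, the words must also appear in order, at most
    max_gap tokens apart (this collapses the two restrictions above)."""
    tokens = sentence.lower().split()
    pie_words = [w for w in pie.lower().split() if w not in {'a', 'an', 'the'}]
    if pie_has_verb:
        return all(any(t in inflections(w) for t in tokens) for w in pie_words)
    prev = -1
    for w in pie_words:
        hits = [i for i, t in enumerate(tokens)
                if t in inflections(w) and i > prev
                and (prev < 0 or i - prev - 1 <= max_gap)]
        if not hits:
            return False
        prev = hits[0]
    return True

print(is_candidate('a change in the everyday running of your life',
                   'in the running', pie_has_verb=False))   # True
print(is_candidate('in the box instead of running all over the show',
                   'in the running', pie_has_verb=False))   # False
```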
<<</Extraction of PIE Candidates>>>
<<<Annotation Procedure>>>
The manual annotation procedure consists of three different phases (pilot, double annotation, single annotation), followed by an adjudication step to resolve conflicting annotations. Two things are annotated: whether something is a PIE or not, and if it is a PIE, which sense the PIE is used in. In the first phase (0-100-*), we randomly select one hundred of the 2,239 PIE candidates, which are then annotated by three annotators. All annotators have a good command of English, are computational linguists, and are familiar with the subject. The annotators include the first and last author of this paper.
The annotators were provided with a short set of guidelines, of which the main rule-of-thumb for labelling a phrase as a PIE is as follows: any phrase is a PIE when it contains all the words, with the same part-of-speech, and in the same grammatical relations as in the dictionary form of the PIE, ignoring determiners.
For sense annotation, annotators were to mark a PIE as idiomatic if it had a sense listed in one of the idiom dictionaries, and as literal if it had a meaning that is a regular composition of its component words. For cases which were undecidable due to lack of context, the ?-label was used. The other-label was used as a container label for all cases in which neither the literal or idiomatic sense was correct (e.g. meta-linguistic uses and embeddings in metaphorical frames, see also Section SECREF10).
The first phase of annotation serves to bring to light any inconsistencies between annotators and fill in any gaps in the annotation guidelines. The resulting annotations already show a reasonably high agreement of 0.74 Fleiss' Kappa. Table TABREF48 shows annotation details and agreement statistics for all three phases. The annotation tasks suffixed by -PIE indicate agreement on PIE/non-PIE annotation and the tasks suffixed by -sense indicate agreement on sense annotation for PIEs.
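For reference, agreement figures of this kind can be reproduced with standard tooling. The following sketch assumes the statsmodels package is available and that the annotations are loaded as an items-by-annotators matrix of labels; it is illustrative only, not the script used for the figures reported here.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy data: 6 PIE candidates rated by 3 annotators (0 = non-PIE, 1 = PIE).
ratings = np.array([[1, 1, 1],
                    [1, 1, 0],
                    [0, 0, 0],
                    [1, 1, 1],
                    [0, 1, 0],
                    [1, 1, 1]])
table, _ = aggregate_raters(ratings)     # item-by-category count table
print(fleiss_kappa(table, method='fleiss'))
```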
In the second phase of annotation (100-600-* & 600-1100-*), another 1,000 of the 2,239 PIE candidates are selected to be annotated by two pairs of annotators. This shows very high agreement, as can be seen in Table TABREF48. This is probably due to the improvement in guidelines and the discussion following the pilot round of annotation. The exception to this are the somewhat lower scores for the 600-1100-sense annotation task. Adjudication revealed that this is due almost exclusively to a different interpretation of the literal and idiomatic senses of a single PIE type: on the ground. Excluding this PIE type, Fleiss' Kappa increases from 0.63 to 0.77.
Because of the high agreement on PIE annotation, we deem it sufficient for the remainder (1,108 candidates) to be annotated by only the primary annotator in the third phase of annotation (1100-2239-*). The reliability of the single annotation can be checked by comparing the distribution of labels to the multi-annotated parts. This shows that it falls clearly within the ranges of the other parts, both in the proportion of PIEs and idiomatic senses (see Table TABREF49). The single-annotated part has 49.0% PIEs, which is only 4 percentage points above the 44.7% PIEs in the multi-annotated parts. The proportion of idioms is just 2 percentage points higher, with 55.9% versus 53.9%.
Although inter-annotator agreement was high, there was still a significant number of cases in the triple and double annotated PIE candidate sets where not all annotators agreed. These cases were adjudicated through discussion by all annotators, until they were in agreement. In addition, all PIE candidates which initially received the ?-label (unclear or undecidable) for sense or PIE were resolved in the same manner. In the adjudication procedure, annotators were provided with additional context on each side of the idiom, in contrast to the single sentence provided during the initial annotation. The main reason to do adjudication, rather than simply discarding all candidates for which there was disagreement, was that we expected exactly those cases for which there are conflicting annotations to be the most interesting ones, since having non-standard properties would cause the annotations to diverge. Examples of such interesting non-standard cases are at sea as part of a larger satirical frame in Example SECREF46 and cut the mustard in Example SECREF46 where it is used in a headline as wordplay on a Cluedo character.
. The bovine heroine has connections with Cowpeace International, and deals with a huge treacle slick at sea. (at sea - BNC - document CBC - sentence 13550)
. Why not cut the Mustard? [..] WADDINGTON Games's proposal to axe Reverend Green from the board game Cluedo is a bad one. (cut the mustard - BNC - document CBC - sentence 14548)
We split the corpus at the document level. The corpus consists of 45 documents from the BNC, and we split it in such a way that the development set has 1,112 candidates across 22 documents and the test set has 1,127 candidates from 23 documents. Note that this means that the development and test set contain different genres. This ensures that we do not optimise our systems on genre-specific aspects of the data.
<<</Annotation Procedure>>>
<<</Corpus Annotation>>>
<<<Dictionary-based PIE Extraction>>>
We propose and implement four different extraction methods, of differing complexities: exact string match, fuzzy string match, inflectional string match, and parser-based extraction. Because of the absence of existing work on this task, we compare these methods to each other, where the more basic methods function as baselines. More complex methods serve to shine light on the difficulty of the PIE extraction task; if simple methods already work sufficiently well, the task is not as hard as expected, and vice versa. Below, each of the extraction methods is presented and discussed in detail.
<<<String-based Extraction Methods>>>
<<<Exact String Match>>>
This is, very simply, extracting all instances of the exact dictionary form of the PIE, from the tokenized text of the corpus. Word boundaries are taken into account, so at sea does not match `that seawater'. As a result, all inflectional and other variants of the PIE are ignored.
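A minimal sketch of this baseline, using a regular expression with word-boundary markers (tokenisation details aside); the re.IGNORECASE flag corresponds to the case-insensitive variant discussed further below.

```python
import re

def exact_matches(pie, sentence):
    """Exact occurrences of the PIE's dictionary form, respecting word
    boundaries, so that 'at sea' does not match 'that seawater'."""
    pattern = r'\b' + r'\s+'.join(map(re.escape, pie.split())) + r'\b'
    return [m.group(0) for m in re.finditer(pattern, sentence, re.IGNORECASE)]

print(exact_matches('at sea', 'A huge treacle slick at sea was reported.'))  # ['at sea']
print(exact_matches('at sea', 'All that seawater smelled bad.'))             # []
```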
<<</Exact String Match>>>
<<<Fuzzy String Match>>>
Fuzzy string match is a rough way of dealing with morphological inflection of the words in a PIE. We match all words in the PIE, taking into account word boundaries, and allow for up to 3 additional letters at the end of each word. These 3 additional characters serve to cover inflectional suffixes.
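A sketch of the corresponding pattern, simply allowing up to three trailing letters per word:

```python
import re

def fuzzy_pattern(pie):
    """Allow up to three extra letters at the end of every word, as a
    rough stand-in for inflectional suffixes (-s, -ed, -ing, ...)."""
    words = [re.escape(w) + r'[a-z]{0,3}' for w in pie.lower().split()]
    return r'\b' + r'\s+'.join(words) + r'\b'

pattern = fuzzy_pattern('spill the beans')
print(bool(re.search(pattern, 'he spilled the beans yesterday')))  # True
print(bool(re.search(pattern, 'she spills the beans')))            # True
```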
<<</Fuzzy String Match>>>
<<<Inflectional String Match>>>
In inflectional string match, we aim to match all inflected variations of a PIE. This is done by generating all morphological variants of the words in a PIE, generating all combinations of those words, and then using exact string match as described earlier.
Generating morphological variations consists of three steps: part-of-speech tagging, morphological analysis, and morphological reinflection. Since inflectional variation only applies to verbs and nouns, we use the Spacy part-of-speech tagger to detect the verbs and nouns. Then, we apply the morphological analyser morpha to get the base, uninflected form of the word, and then use the morphological generation tool morphg to get all possible inflections of the word. Both tools are part of the Morph morphological processing suite BIBREF38. Note that the Morph tools depend on the part-of-speech tag in the input, so that a wrong PoS may lead to an incorrect set of morphological variants.
For a PIE like spill the beans, this results in the following set of variants: {spill the bean, spills the bean, spilled the bean, spilling the bean, spill the beans, spills the beans, spilled the beans, spilling the beans}. Since we generate up to 2 variants for each noun, and up to 4 variants for each verb, the number of variants for PIEs containing multiple verbs and nouns can get quite large. On average, 8 additional variants are generated for each potentially idiomatic expression.
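The variant generation can be sketched as follows, with word_variants acting as a toy placeholder for the PoS tagger and the morpha/morphg tools in the actual pipeline:

```python
from itertools import product

def word_variants(word, pos):
    """Placeholder for PoS tagging plus morphological analysis and
    generation (morpha/morphg in the actual pipeline)."""
    if pos == 'VERB':
        return [word, word + 's', word + 'ed', word + 'ing']
    if pos == 'NOUN':
        return [word, word + 's']
    return [word]

def pie_variants(tagged_pie):
    """All inflectional variants of a PIE, given (word, pos) pairs."""
    per_word = [word_variants(w, p) for w, p in tagged_pie]
    return [' '.join(combo) for combo in product(*per_word)]

print(pie_variants([('spill', 'VERB'), ('the', 'DET'), ('bean', 'NOUN')]))
# ['spill the bean', 'spill the beans', 'spills the bean', ...,
#  'spilling the beans']   (8 variants in total)
```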
<<</Inflectional String Match>>>
<<<Additional Steps>>>
For all string match-based methods, ways to improve performance are implemented, to make them as competitive as possible. Rather than doing exact string matching, we also allow words to be separated by something other than spaces, e.g. nuts-and-bolts for nuts and bolts. Additionally, there is an option to take into account case distinctions. With the case-sensitive option, case is preserved in the idiom lists, e.g. coals to Newcastle, and the string matching is done in a case-sensitive manner. This increases precision, e.g. by avoiding PIEs as part of proper names, but also comes at a cost of recall, e.g. for sentence-initial PIEs. Thirdly, there is the option to allow for a certain number of intervening words between each pair of words in the PIE. This should improve recall, at the cost of precision. For example, this would yield the true positive make a huge mountain out of a molehill for make a mountain out of a molehill, but also false positives like have a smoke and go for have a go.
Another shared property of the string-based methods is the processing of placeholders in PIEs. PIEs containing possessive pronoun placeholders, such as one's and someone's, are expanded. That is, we remove the original PIE, and add copies of the PIE where the placeholder is replaced by one of the possessive personal pronouns. For example, a thorn in someone's side is replaced by a thorn in {my, your, his, ...} side. In the case of someone's, we also add a wildcard for any possessively used word, i.e. a thorn in —'s side, to match e.g. a thorn in Google's side. Similarly, we make sure that PIE entries containing —, such as the mother of all —, will match any word for — during extraction. We do the same for someone, for which we substitute objective pronouns. For one, this is not possible, since it is too hard to distinguish from the one used as a number.
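The placeholder handling can be sketched as follows. The asterisk is merely a stand-in notation for the wildcard mentioned above, and the sketch assumes at most one placeholder type per entry:

```python
POSSESSIVES = ['my', 'your', 'his', 'her', 'its', 'our', 'their']
OBJECT_PRONOUNS = ['me', 'you', 'him', 'her', 'it', 'us', 'them']

def expand_placeholders(pie):
    """Expand placeholder words in a PIE entry into concrete strings for
    matching; '*' stands for the wildcard discussed in the text.
    Simplified: handles at most one placeholder type per entry."""
    if "someone's" in pie:                      # must be checked before "one's"
        return [pie.replace("someone's", p) for p in POSSESSIVES + ["*'s"]]
    if "one's" in pie:
        return [pie.replace("one's", p) for p in POSSESSIVES]
    if 'someone' in pie:
        return [pie.replace('someone', p) for p in OBJECT_PRONOUNS]
    if '—' in pie:
        return [pie.replace('—', '*')]
    return [pie]

print(expand_placeholders("a thorn in someone's side"))
# ["a thorn in my side", ..., "a thorn in their side", "a thorn in *'s side"]
```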
<<</Additional Steps>>>
<<</String-based Extraction Methods>>>
<<<Parser-Based Extraction Methods>>>
Parser-based extraction is potentially the widest-coverage extraction method, with the capacity to extract both morphological and syntactic variants of the PIE. This should be robust against the most common modifications of the PIE, e.g. through word insertions (spill all the beans), passivisation (the beans were spilled), and abstract over articles (spill beans).
In this method, PIEs are extracted using the assumption that any sentence which contains the lemmata of the words in the PIE, in the same dependency relations as in the PIE, contains an instance of the PIE type in question. More concretely, this means that the parse of the sentence should contain the parse tree of the PIE as a subtree. This is illustrated in Figure FIGREF57, which shows the parse tree for the PIE lose the plot, parsed without context. Note that this is a subtree of the parse tree for the sentence `you might just lose the plot completely', which is shown in Figure FIGREF58. Since the sentence parse contains the parse of the PIE, we can conclude that the sentence contains an instance of that PIE and extract the span of the PIE instance.
All PIEs are parsed in isolation, based on the assumption that all PIEs can be parsed, since they are almost always well-formed phrases. However, not all PIEs will be parsed correctly, especially since there is no context to resolve ambiguity. Errors tend to occur at the part-of-speech level, where, for example, verb-object combinations like jump ship and touch wood are erroneously tagged as noun-noun compounds. An analysis of the impact of parser error on PIE extraction performance is presented in Section SECREF73. Initially, we use the Spacy parser for parsing both the PIEs and the sentences.
Next, the sentence is parsed, and the lemma of the top node of the parsed PIE is matched against the lemmata of the sentence parse. If a match is found, the parse tree of the PIE is matched against the subtree of the matching sentence parse node. If the whole PIE parse tree matches, the span ranging from the first PIE token to the last is extracted. This span can thus include words that are not directly part of the PIE's dictionary form, in order to account for insertions like ships were jumped for jump ship, or have a big heart for have a heart.
During the matching, articles (a/an/the) are ignored, and passivisation is accounted for with a special rule. In addition, a number of special cases are dealt with. These are PIEs containing someone('s), something('s), one's, or —. These words are used in PIEs as placeholders for a generic possessor (someone's/something's/one's), generic object (someone/something), or any word of the right PoS (—).
For someone's, and something's, we match any possessive pronoun, or (proper) noun + possessive marker. For one's, only possessive pronouns are matched, since this is a placeholder for reflexive possessors. For someone and something, any non-possessive pronoun or (proper) noun is matched.
For — wildcards, any word can be matched, as long as it has the right relation to the right head. An additional challenge with these wildcards is that PIEs containing them cannot be parsed, e.g. too — for words is not parseable. This is dealt with by substituting the — by a PoS-ambiguous word, such as fine, or back.
Two optional features are added to the parser-based method with the goal of making it more robust to parser errors: generalising over dependency relation labels, and generalising over dependency relation direction. We expect this to increase recall at the cost of precision. In the first no labels setting, we match parts of the parse tree which have the same head lemma and the same dependent lemma, regardless of the relation label. An example of this is Figure FIGREF60, which has the wrong relation label between up and ante. If labels are ignored, however, we can still extract the PIE instance in Figure FIGREF61, which has the correct label. In the no directionality setting, relation labels are also ignored, and in addition the directionality of the relation is ignored, that is, we allow for the reversal of heads and dependents. This benefits performance in a case like Figure FIGREF62, which has stock as the head of laughing in a compound relation, whereas the parse of the PIE (Figure FIGREF63) has laughing as the head of stock in a dobj relation.
Note that similar settings were implemented by BIBREF26, who detect literal uses of VMWEs using a parser-based method with either full labelled dependencies, unlabelled dependencies, or directionless unlabelled dependencies (which they call BagOfDeps). They find that recall increases when less restrictions on the dependencies are used, but that this does not hurt precision, as we would expect. However, we cannot draw too many conclusions from these results due to the small size of their evaluation set, which consists of just 72 literal VMWEs in total.
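To make the matching procedure described above concrete, the following is a minimal sketch of the core subtree-matching step using spaCy (assuming the small English model is installed). It omits the special cases (articles, passivisation, placeholders) and the optional no-labels/no-directionality settings, and uses greedy child matching; it is an illustration, not the exact implementation.

```python
import spacy

nlp = spacy.load('en_core_web_sm')

def match_tree(pie_tok, sent_tok, use_labels=True):
    """Match the PIE parse tree rooted at pie_tok against the sentence
    subtree rooted at sent_tok; return the indices of matched sentence
    tokens, or None.  Simplified: greedy child matching, no special
    handling of articles, passives or placeholder words."""
    if pie_tok.lemma_ != sent_tok.lemma_:
        return None
    matched = {sent_tok.i}
    for pie_child in pie_tok.children:
        hit = None
        for cand in sent_tok.children:
            if cand.i in matched:
                continue
            if use_labels and cand.dep_ != pie_child.dep_:
                continue
            sub = match_tree(pie_child, cand, use_labels)
            if sub is not None:
                hit = sub
                break
        if hit is None:
            return None
        matched |= hit
    return matched

def extract_pie(pie, sentence):
    """Extract the span from the first to the last matched token."""
    pie_root = [t for t in nlp(pie) if t.dep_ == 'ROOT'][0]
    doc = nlp(sentence)
    for tok in doc:
        matched = match_tree(pie_root, tok)
        if matched:
            return doc[min(matched):max(matched) + 1].text
    return None

print(extract_pie('lose the plot', 'You might just lose the plot completely.'))
# 'lose the plot'  (provided the parser analyses the PIE and sentence alike)
```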
<<<In-Context Parsing>>>
Since the parser-based method parses PIEs without any context, it often finds an incorrect parse, as for jump ship in Figure FIGREF65. As such, we add an option to the method that aims to increase the number of correct parses by parsing the PIE within context, that is, within a sentence. This can greatly help to disambiguate the parse, as in Figure FIGREF66. If the number of correct parses goes up, the recall of the extraction method should also increase. Naturally, it can also be the case that a PIE is parsed correctly without context, and incorrectly with context. However, we expect the gains to outweigh the losses.
The challenge here is thus to collect example sentences containing the PIE. Since the whole point of this work is to extract PIEs from raw text, this provides a catch-22-like situation: we need to extract a sentence containing a PIE in order to extract sentences containing a PIE.
The workaround for this problem is to use the exact string matching method with the dictionary form of the PIE and a very large plain text corpus to gather example sentences. By only considering the exact dictionary form we both simplify the finding of example sentences and the extraction of the PIE's parse from the sentence parse.
In case multiple example sentences are found, the shortest sentence is selected, since we assume it is easiest to parse. This is also the reason we make use of very large corpora, to increase the likelihood of finding a short, simple sentence. The example sentence extraction method is modified in such a way that sentences where the PIE is used meta-linguistically in quotes, e.g. “the well-known English idiom `to spill the beans' has no equivalents in other languages”, are excluded, since they do not provide a natural context for parsing. When no example sentence can be found in the corpus, we back-off to parsing the PIE without context. After a parse has been found for each PIE (i.e. with or without context), the method proceeds identically to the regular parser-based method.
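The example-sentence selection can be sketched as follows, using a simple quote check to exclude meta-linguistic uses and falling back to the bare PIE when nothing is found; the actual implementation may differ in detail.

```python
import re

def example_sentence(pie, corpus_sentences):
    """Select the shortest corpus sentence containing the PIE's exact
    dictionary form, skipping sentences where the PIE appears inside
    quotes (meta-linguistic use); fall back to the bare PIE."""
    contains = re.compile(r'\b' + re.escape(pie) + r'\b')
    quoted = re.compile(r"['\"][^'\"]*" + re.escape(pie))
    candidates = [s for s in corpus_sentences
                  if contains.search(s) and not quoted.search(s)]
    return min(candidates, key=len) if candidates else pie

corpus = ["Rats jump ship first.",
          "Many sailors decided to jump ship in Lisbon.",
          "The idiom 'jump ship' is common."]
print(example_sentence('jump ship', corpus))   # 'Rats jump ship first.'
```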
We make use of the combination of two large corpora for the extraction of example sentences: the English Wikipedia, and ukWaC BIBREF17. For the Wikipedia corpus, we use a dump (13-01-2016) of the English-language Wikipedia, and remove all Wikipedia markup. This is done using WikiExtractor. The resulting files still contain some mark-up, which is removed heuristically. The resulting corpus contains mostly clean, raw, untokenized text, numbering approximately 1.78 billion tokens.
As for ukWaC, all XML-markup was removed, and the corpus is converted to a one-sentence-per-line format. UkWaC is tokenized, which makes it difficult for a simple string match method to find PIEs containing punctuation, for example day in, day out. Therefore, all spaces before commas, apostrophes, and sentence-final punctuation are removed. The resulting corpus contains approximately 2.05 billion tokens, making for a total of 3.83 billion tokens in the combined ukWaC and Wikipedia corpus.
<<</In-Context Parsing>>>
<<</Parser-Based Extraction Methods>>>
<<<Analysis>>>
Broadly speaking, the PIE extraction systems presented above perform in line with expectations. It is nevertheless useful to see where the best-performing system misses out, and where improvements like in-context parsing help performance.
We analyse the shortcomings of the in-context parser-based system by looking at the false positives and false negatives on the development set. We consider the output of the system with best overall performance, since it will provide the clearest picture. The system extracts 529 PIEs in total, of which 54 are false extractions (false positives), and it misses 69 annotated PIE instances (false negatives). Most false positives stem from the system's failure to capture nuances of PIE annotation. This includes cases where PIEs contain, or are part of, proper nouns (Example SECREF73), PIEs that are part of coordination constructions (Example SECREF73), and incorrect attachments (Example SECREF73). Among these errors, sentences containing proper nouns are an especially frequent problem.
. Drama series include [..] airline security thrills in Cleared For Takeoff and Head Over Heels [..] (in the clear - BNC - document CBC - sentence 5177)
. They prefer silk, satin or lace underwear in tasteful black or ivory. (in the black - BNC - document CBC - sentence 14673)
. [..] `I saw this chap make something out of an ordinary piece of wood — he fashioned it into an exquisite work of art.' (out of the woods - BNC - document ABV - sentence 1300)
The main cause of false negatives are errors made by the parser. In order to correctly extract a PIE from a sentence, both the PIE and the sentence have to be parsed correctly, or at least parsed in the same way. This means a missed extraction can be caused by a wrong parse for the PIE or a wrong parse for the sentence. These two error types form the largest class of false negatives. Since some PIE types are rather frequent, a wrong parse for a single PIE type can potentially lead to a large number of missed extractions.
It is not surprising that the parser makes many mistakes, since idioms often have unusual syntactic constructions (e.g. come a cropper) and contain words where default part-of-speech tags lead to the wrong interpretation (e.g. round is a preposition in round the bend, not a noun or adjective). This is especially true when idioms are parsed without context, and hence, where in-context parsing provides the largest benefit: the number of PIEs which are parsed incorrectly drops, which leads to F1-scores on those types going from 0% to almost 100% (e.g. in light of and ring a bell). Since parser errors are the main contributor to false negatives, hurting recall, we can observe that parsing idioms in context serves to benefit only recall, by 7 percentage points, at only a small loss in precision.
We find that adding context mainly helps for parsing expressions which are structurally relatively simple, but still ambiguous, such as rub shoulders, laughing stock, and round the bend. Compare, for example, the parse trees for laughing stock in isolation and within the extracted context sentence in Figures FIGREF74 and FIGREF75. When parsed in isolation, the relation between the two words is incorrectly labelled as a compound relation, whereas in context it is correctly labelled as a direct object relation. Note, however, that for the most difficult PIEs, embedding them in a context does not solve the parsing problem: a syntactically odd phrase is hard to parse (e.g. for the time being), and a syntactically odd phrase in a sentence makes for a syntactically odd sentence that is still hard to parse (e.g. `London for the time being had been abandoned.'). Finding example sentences turned out not to be a problem, since appropriate sentences were found for 559 of 591 PIE types.
An alternative method for reducing parser error is to use a different, better parser. The Spacy parser was mainly chosen for implementation convenience and speed, and there are parsers which have better performance, as measured on established parsing benchmarks. To investigate the effectiveness of this method, we used the Stanford Neural Dependency Parser BIBREF39 to extract PIEs in the regular parsing, in-context parsing, and no labels settings. In all cases, using the Stanford parser yielded worse extraction performance than the Spacy parser. A possible explanation for why a supposedly better parser performs worse here is that parsers are optimised and trained to do well on established benchmarks, which consist of complete sentences, often from news texts. This does not necessarily correlate with parsing performance on short idiomatic phrases, or on the sentences that contain them. As such, we cannot assume that better overall parsing performance implies better PIE extraction performance.
It should be noted that, when assessing the quality of PIE extraction performance, the parser-based methods are sensitive to specific PIE types. That is, if a single PIE type is parsed incorrectly, then it is highly probable that all instances of that type are missed. If this type is also highly frequent, this means that a small change in actual performance yields a large change in evaluation scores. Our goal is to have a PIE extraction system that is robust across all PIE types, and thus the current evaluation setting does not align exactly with our aim.
Splitting out performance per PIE type reveals whether there is indeed a large variance in performance across types. Table TABREF76 shows the 25 most frequent PIE types in the corpus, and the performance of the in-context-parsing-based system on each. Except in two cases (in the black and round the bend), we see that the performance is in the 80–100% range, even showing perfect performance on the majority of types.
For none of the types do we see low precision paired with high recall, which indicates that the parser never matches a highly frequent non-PIE phrase. For the system with the no labels and no-directionality options (per-type numbers not shown here), however, this does occur. For example, ignoring the labels for the parse of the PIE have a go leads to the erroneous matching of many sentences containing a form of have to go, which is highly frequent, thus leading to a large drop in precision.
Although performance is stable across the most frequent types, among the less frequent types it is more spotty. This hurts overall performance, and there are potential gains in mitigating the poor performance on these types, such as for the time being. At the same time, the string matching methods show much more stable performance across types, and some of them do so with very high precision. As such, a combination of two such methods could boost performance significantly. If we use a high-precision string match-based method, such as the exact string match variant with a precision of 97.35%, recall could be improved for the wrongly parsed PIE types, without a significant loss of precision.
We experiment with two such combinations, by simply taking the union of the sets of extracted idioms of both systems, and filtering out duplicates. Results are shown in Table TABREF77. Both combinations show the expected effect: a clear gain in recall at a minimal loss in precision. Compared to the in-context-parsing-based system, the combination with exact string matching yields a gain in recall of over 6%, and the combination with inflectional string matching yields an even bigger gain of almost 8%, at precision losses of 0.6% and 0.8%, respectively. This indicates that the systems are very much complementary in the PIEs they extract. It also means that, when used in practice, combining inflectional string matching and parse-based extraction is the most reliable configuration.
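Operationally, such a combination amounts to taking the set union of the two extractors' outputs. The sketch below is not the released code; it assumes, for illustration only, that each extractor reports instances as (sentence id, PIE type, character span) tuples, which makes deduplication a plain set operation.

```python
# Sketch: combine a parser-based extractor with a high-precision string matcher
# by taking the union of their extractions and removing duplicates.
# Each extraction is assumed to be a (sentence_id, pie_type, char_span) tuple.

def combine_extractions(parser_hits, string_hits):
    return sorted(set(parser_hits) | set(string_hits))

parser_hits = [(12, "in light of", (4, 15)), (40, "ring a bell", (22, 33))]
string_hits = [(12, "in light of", (4, 15)), (57, "for the time being", (0, 18))]

for hit in combine_extractions(parser_hits, string_hits):
    print(hit)
```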
<<</Analysis>>>
<<</Dictionary-based PIE Extraction>>>
<<<Conclusions and Outlook>>>
We present an in-depth study on the automatic extraction of potentially idiomatic expressions based on dictionaries. The purpose of automatic dictionary-based extraction is, on the one hand, to function as a pre-extraction step in the building of a large idiom-annotated corpus. On the other hand, it can function as part of an idiom extraction system when combined with a disambiguation component. In both cases, the ultimate goal is to improve the processing of idiomatic expressions within NLP. This work consists of three parts: a comparative evaluation of the coverage of idiom dictionaries, the annotation of a PIE corpus, and the development and evaluation of several dictionary-based PIE extraction methods.
In the first part, we present a study of idiom dictionary coverage, which serves to answer the question of whether a single idiom dictionary, or a combination of dictionaries, can provide good coverage of the set of all English idioms. Based on the comparison of dictionaries to each other, we estimate that the overlap between them is limited, varying from 20% to 55%, which indicates a large divergence between the dictionaries. This can be explained by the fact that idioms vary widely by register, genre, language variety, and time period. In our case, it is also likely that the divergence is caused partly by the gap between crowdsourced dictionaries on the one hand, and a dictionary compiled by professional lexicographers on the other. Given these factors, we can conclude that a single dictionary cannot provide even close to complete coverage of English idioms, but that by combining dictionaries from various sources, significant gains can be made. Since `English idioms' are a diffuse and constantly changing set, we have no gold standard to compare to. As such, we conclude that multiple dictionaries should be used when possible, but that we cannot say anything definitive on the coverage of dictionaries with regard to the complete set of English idioms (which can only be approximated in the first place). A more comprehensive comparison of idiom resources could be made in the future by using more advanced automatic methods for matching, for example by using BIBREF32's (BIBREF32) method for measuring expression variability. This would make it easier to evaluate a larger number of dictionaries, since no manual effort would be required.
In the second part, we experiment with the exhaustive annotation of PIEs in a corpus of documents from the BNC. Using a set of 591 PIE types, much larger and more varied than in existing resources, we show that it is very much possible to establish a working definition of PIE that allows for a large amount of variation, while still being useful for reliable annotation. This resulted in high inter-annotator agreement, ranging from 0.74 to 0.91 Fleiss' Kappa. This means that we can build a resource to evaluate a wide-range idiom extraction system with relatively little effort. The final corpus of PIEs with sense annotations is publicly available, consists of 2,239 PIE candidates, of which 1,050 are actual PIE instances, and contains 278 different PIE types.
Finally, several methods for the automatic extraction of PIE instances were developed and evaluated on the annotated PIE corpus. We tested methods of differing complexity, from simple string match to dependency parse-based extraction. Comparison of these methods revealed that the more computationally complex method, parser-based extraction, works best. Parser-based extraction is especially effective in capturing a larger amount of variation, but is less precise than string-based methods, mostly because of parser error. The best overall setting of this method, which parses idioms within context, yielded an F1-score of 89.13% on the test set. Parser error can be partly compensated by combining the parse-based method and the inflectional string match method, which yields an F1-score of 92.01% (on the development set). This aligns well with the findings by BIBREF27, who found that combining simpler and more complex methods improves over just using a simple method case for extracting verb-particle constructions. This level of performance means that we can use the tool in corpus building. This greatly reduces the amount of manual extraction effort involved, while still maintaining a high level of recall. We make the source code for the different systems publicly available. Note that, although used here in the context of PIE extraction, our methods are equally applicable to other phrase extraction tasks, for example the extraction of light-verb constructions, metaphoric constructions, collocations, or any other type of multiword expression (cf. BIBREF27, BIBREF25, BIBREF26). Similarly, our method can be conceived as a blueprint and extended to languages other than English. For this to be possible, for any given new language one would need a list of target expressions and, in the case of the parser-based method, a reliable syntactic parser. If this is not the case, the inflectional matching method can be used, which requires only a morphological analyser and generator. Obviously, for languages that are morphologically richer than English, one would need to develop strategies aimed at controlling non-exact matches, so as to enhance recall without sacrificing precision. Previous work on Italian, for example, has shown the feasibility of achieving such balance through controlled pattern matching BIBREF40. Languages that are typologically very different from English would obviously require a dedicated approach for the matching of PIEs in corpora, but the overall principles of extraction, using language-specific tools, could stay the same.
Currently, no corpora containing annotation of PIEs exist for languages other than English. However, the PARSEME corpus BIBREF19 already contains idioms (only idiomatic readings) for many languages and would only need annotation of literal usages of idioms to make up a set of PIEs. Paired with the Universal Dependencies project BIBREF41, which increasingly provides annotated data as well as processing tools for an ever growing number of languages, this seems an excellent starting point for creating PIE resources in multiple languages.
<<</Conclusions and Outlook>>>
<<</Title>>>
|
{
"references": [
"Introduction, Related Work"
],
"type": "disordered_section"
}
|
1910.11235
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Rethinking Exposure Bias In Language Modeling
<<<Abstract>>>
Exposure bias describes the phenomenon that a language model trained under the teacher forcing schema may perform poorly at the inference stage when its predictions are conditioned on its previous predictions unseen from the training corpus. Recently, several generative adversarial networks (GANs) and reinforcement learning (RL) methods have been introduced to alleviate this problem. Nonetheless, a common issue in RL and GANs training is the sparsity of reward signals. In this paper, we adopt two simple strategies, multi-range reinforcing, and multi-entropy sampling, to amplify and denoise the reward signal. Our model produces an improvement over competing models with regards to BLEU scores and road exam, a new metric we designed to measure the robustness against exposure bias in language models.
<<</Abstract>>>
<<<Introduction>>>
Likelihood-based language models with deep neural networks have been widely adopted to tackle language tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. By far, one of the most popular training strategies is teacher forcing, which derives from the general maximum likelihood estimation (MLE) principle BIBREF4. Under the teacher forcing schema, a model is trained to make predictions conditioned on ground-truth inputs. Although this strategy enables effective training of large neural networks, it can aggravate exposure bias: a model may perform poorly at the inference stage, once its self-generated prefix diverges from the previously learned ground-truth data BIBREF5.
A common approach to mitigate this problem is to impose supervision upon the model's own exploration. To this objective, the existing literature has introduced REINFORCE BIBREF6 and actor-critic (AC) methods BIBREF7 (including language GANs BIBREF8), which offer direct feedback on a model's self-generated sequences, so the model can later, at the inference stage, deal with previously unseen exploratory paths. However, due to the well-known issue of reward sparseness and the potential noise in the critic's feedback, these methods are reported to risk compromising the generation quality, specifically in terms of precision.
In this paper, we adopt two simple strategies, multi-range reinforcing and multi-entropy sampling to overcome the reward sparseness during training. With the tricks applied, our model demonstrates a significant improvement over competing models. In addition, we propose road exam as a new metric to reveal a model's robustness against exposure bias.
<<</Introduction>>>
<<<Related Works>>>
As an early work to address exposure bias, BIBREF5 proposed a curriculum learning approach called scheduled sampling, which gradually replaces the ground-truth tokens with the model's own predictions while training. Later, BIBREF9 criticized this approach for pushing the model towards overfitting onto the corpus distribution based on the position of each token in the sequence, instead of learning about the prefix.
In recent RL-inspired works, BIBREF10 built on the REINFORCE algorithm to directly optimize the test-time evaluation metric score. BIBREF11 employed a similar approach by training a critic network to predict the metric score that the actor's generated sequence of tokens would obtain. In both cases, the reliance on a metric to accurately reflect the quality of generated samples becomes a major limitation. Such metrics are often unavailable and difficult to design by nature.
In parallel, adversarial training was introduced into language modeling by SeqGAN BIBREF8. This model consists of a generator pre-trained under MLE and a discriminator pre-trained to discern the generator's distribution from the real data. Follow-up works based on SeqGAN alter their training objectives or model architectures to enhance the guidance signal's informativeness. RankGAN replaces the absolute binary reward with a relative ranking score BIBREF12. LeakGAN allows the discriminator to “leak” its internal states to the generator at intermediate steps BIBREF13. BIBREF14 models a reward function using inverse reinforcement learning (IRL). While much progress has been made, we surprisingly observed that SeqGAN BIBREF8 shows more stable results in the road exam in Section SECREF20. Therefore, we aim to amplify and denoise the reward signal in a direct and simple fashion.
<<</Related Works>>>
<<<Model Description>>>
Problem Re-Formulation: Actor-Critic methods (ACs) consider language modeling as a generalized Markov Decision Process (MDP) problem, where the actor learns to optimize its policy guided by the critic, while the critic learns to optimize its value function based on the actor's output and external reward information.
As BIBREF15 points out, GAN methods can be seen as a special case of AC where the critic aims to distinguish the actor's generation from real data and the actor is optimized in an opposite direction to the critic.
Actor-Critic Training: In this work, we use a standard single-layer LSTM as the actor network. The training objective is to maximize the model's expected end rewards with policy gradient BIBREF16:
Then, we use a CNN as the critic to predict the expected reward for the current generated prefix:
In practice, we perform a Monte-Carlo (MC) search with roll-out policy following BIBREF8 to sample complete sentences starting from each location in a predicted sequence and compute their end rewards. Empirically, we found that the maximum, instead of average, of rewards in the MC search better represents each token's action value and yields better results during training. Therefore, we compute the action value by:
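The action-value equation itself is not reproduced in this excerpt, but the computation described above (the maximum over end rewards of Monte-Carlo roll-outs from a given prefix) can be sketched as follows. The `rollout` and `end_reward` callables are hypothetical stand-ins for the actor's roll-out policy and the critic's score.

```python
# Sketch of the described action value: sample N complete continuations of a
# prefix with the roll-out policy, score each with the critic, take the maximum.

def action_value(prefix, rollout, end_reward, n_samples=16):
    rewards = []
    for _ in range(n_samples):
        full_sequence = rollout(prefix)            # hypothetical: sample a completion
        rewards.append(end_reward(full_sequence))  # hypothetical: critic's end reward
    return max(rewards)  # maximum rather than average, as described in the text
```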
In RL and GANs training, two major factors behind the unstable performance are the large variance and the update correlation during the sampling process BIBREF17, BIBREF18. We address these problems using the following strategies:
Multi-Range Reinforcing: Our idea of multi-range supervision takes inspiration from deeply-supervised nets (DSNs) BIBREF19. Under deep supervision, intermediate layers of a deep neural network have their own training objectives and receive direct supervision simultaneously with the final decision layer. By design, lower layers in a CNN have smaller receptive fields, allowing them to make better use of local patterns. Our “multi-range” modification enables the critic to focus on local n-gram information in the lower layers while attending to global structural information in the higher layers. This is a solution to the high variance problem, as the actor can receive amplified reward with more local information compared to BIBREF8.
Multi-Entropy Sampling: Language GANs can be seen as online RL methods, where the actor is updated from data generated by its own policy with strong correlation. Inspired by BIBREF20, we empirically find that altering the entropy of the actor's sample distribution during training is beneficial to the AC network's robust performance. Specifically, we alternate the temperature $\tau $ to generate samples under different behavior policies. During the critic's training, the ground-truth sequences are assigned a perfect target value of 1. The samples obtained with $\tau < 1$ are supposed to contain lower entropy and to diverge less from the real data, so they receive a higher target value close to 1. Those obtained with $\tau > 1$ contain higher entropy and more errors, so their target values are lower and closer to 0. This mechanism decorrelates updates during sequential sampling by sampling multiple diverse entropy distributions from the actor synchronously.
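A rough sketch of this sampling scheme is given below, assuming a PyTorch actor that exposes per-step logits. The pairing of temperatures with critic targets follows the principle stated above (lower temperature, higher target) and uses the values reported later in the training settings; the exact pairing and the `sample_sequence` helper are assumptions, not the authors' code.

```python
import torch

# Sketch of multi-entropy sampling: draw sequences from the actor under several
# softmax temperatures and pair each batch with a critic target value.
TEMPERATURES = [0.5, 0.75, 1.0, 1.25, 1.5]
TARGET_VALUES = [0.8, 0.6, 0.4, 0.2, 0.0]  # assumed pairing: lower tau -> target nearer 1

def sample_step(logits, temperature):
    # Temperature-scaled categorical sampling for one decoding step.
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)

def build_critic_batch(actor, sample_sequence):
    # Ground-truth sequences would additionally be added with a target of 1.
    batch = []
    for tau, target in zip(TEMPERATURES, TARGET_VALUES):
        sequence = sample_sequence(actor, tau)  # hypothetical full-sequence sampler
        batch.append((sequence, target))
    return batch
```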
<<<Effectiveness of Multi-Range Reinforcing and Multi-Entropy Sampling>>>
Table TABREF5 demonstrates an ablation study on the effectiveness of multi-range reinforcing (MR) and multi-entropy sampling (ME). We observe that ME improves $\text{BLEU}_{\text{F5}}$ (precision) significantly, while MR further enhances $\text{BLEU}_{\text{F5}}$ (precision) and $\text{BLEU}_{\text{B5}}$ (recall). Detailed explanations of these metrics can be found in Section SECREF4.
<<</Effectiveness of Multi-Range Reinforcing and Multi-Entropy Sampling>>>
<<</Model Description>>>
<<<Model Evaluation>>>
<<<Modeling Capacity & Sentence Quality>>>
We adopt three variations of BLEU metric from BIBREF14 to reflect precision and recall.
$\textbf {BLEU}_{\textbf {F}}$, or forward BLEU, is a metric for precision. It uses the real test dataset as references to calculate how many n-grams in the generated samples can be found in the real data.
$\textbf {BLEU}_{\textbf {B}}$, or backward BLEU, is a metric for recall. This metric takes both diversity and quality into computation. A model with severe mode collapse or diverse but incorrect outputs will receive poor scores in $\text{BLEU}_{\text{B}}$.
$\textbf {BLEU}_{\textbf {HA}}$ is the harmonic mean of $\text{BLEU}_{\text{F}}$ and $\text{BLEU}_{\text{B}}$, given by:
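The formula itself does not appear in this excerpt; assuming the standard harmonic mean of the two scores, it would take the form $\text{BLEU}_{\text{HA}} = \frac{2 \cdot \text{BLEU}_{\text{F}} \cdot \text{BLEU}_{\text{B}}}{\text{BLEU}_{\text{F}} + \text{BLEU}_{\text{B}}}$.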
<<</Modeling Capacity & Sentence Quality>>>
<<<Exposure Bias Attacks>>>
Road Exam is a novel test we propose as a direct evaluation of exposure bias. In this test, a sentence prefix of length $K$, either taken from the training or testing dataset, is fed into the model under assessment to perform a sentence completion task. Thereby, the model is directed onto either a seen or an unseen “road” to begin its generation. Because precision is the primary concern, we set $\tau =0.5$ to sample high-confidence sentences from each model's distribution. We compare $\text{BLEU}_{\text{F}}$ of each model on both seen and unseen completion tasks and over a range of prefix lengths. By definition, a model with exposure bias should perform worse in completing sentences with unfamiliar prefixes. The sentence completion quality should decay more drastically as the unfamiliar prefix grows longer.
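A minimal sketch of this evaluation loop, using NLTK's corpus-level BLEU and a hypothetical `complete` helper for low-temperature sentence completion, could look as follows; it illustrates the procedure and is not the evaluation code used in the paper.

```python
from nltk.translate.bleu_score import corpus_bleu

# Sketch of the road exam: condition the model on a K-token prefix (seen or
# unseen), sample a low-temperature completion, and score forward BLEU-4
# against the real test data (tokenized reference sentences).
def road_exam(model, prefix_sentences, test_references, complete, k=8, tau=0.5):
    hypotheses = [complete(model, sent[:k], tau) for sent in prefix_sentences]
    # Every hypothesis is scored against the full set of tokenized test references.
    return corpus_bleu([test_references] * len(hypotheses), hypotheses,
                       weights=(0.25, 0.25, 0.25, 0.25))
```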
<<</Exposure Bias Attacks>>>
<<</Model Evaluation>>>
<<<Experiment>>>
<<<Datasets>>>
We evaluate on two datasets: EMNLP2017 WMT News and Google-small, a subset of Google One Billion Words.
EMNLP2017 WMT News is provided in BIBREF21, a benchmarking platform for text generation models. We split the entire dataset into a training set of 195,010 sentences, a validation set of 83,576 sentences, and a test set of 10,000 sentences. The vocabulary size is 5,254 and the average sentence length is 27.
Google-small is sampled and pre-processed from the Google One Billion Words corpus. It contains a training set of 699,967 sentences, a validation set of 200,000 sentences, and a test set of 99,985 sentences. The vocabulary size is 61,458 and the average sentence length is 29.
<<</Datasets>>>
<<<Implementation Details>>>
<<<Network Architecture:>>>
We implement a standard single-layer LSTM as the generator (actor) and an eight-layer CNN as the discriminator (critic). The LSTM has embedding dimension 32 and hidden dimension 256. The CNN consists of 8 layers with filter size 3, where the 3rd, 5th, and 8th layers are directly connected to the output layer for multi-range supervision. Other parameters are consistent with BIBREF21.
<<</Network Architecture:>>>
<<<Training Settings:>>>
Adam optimizer is deployed for both critic and actor with learning rate $10^{-4}$ and $5 \cdot 10^{-3}$ respectively. The target values for the critic network are set to [0, 0.2, 0.4, 0.6, 0.8] for samples generated by the RNN with softmax temperatures [0.5, 0.75, 1.0, 1.25, 1.5].
<<</Training Settings:>>>
<<</Implementation Details>>>
<<<Discussion>>>
Table TABREF9 and Table TABREF10 compare models on EMNLP2017 WMT News and Google-small. Our model outperforms the others in $\text{BLEU}_{\text{F5}}$, $\text{BLEU}_{\text{B5}}$, and $\text{BLEU}_{\text{HA5}}$, indicating a high diversity and quality in its sample distribution. It is noteworthy that, LeakGAN and our model are the only two models to demonstrate improvements on $\text{BLEU}_{\text{B5}}$ over the teacher forcing baseline. The distinctive increment in recall indicates less mode collapse, which is a common problem in language GANs and ACs.
Figure FIGREF16 demonstrates the road exam results on EMNLP2017 WMT News. All models decrease in sampling precision (reflected via $\text{BLEU}_{\text{F4}}$) as the fed-in prefix length ($K$) increases, but the effect is stronger on the unseen test data, revealing the existence of exposure bias. Nonetheless, our model trained under ME and MR yields the best sentence quality and a relatively moderate performance decline.
Although TF and SS demonstrate higher $\text{BLEU}_{\text{F5}}$ performance with shorter prefixes, their sentence qualities drop drastically on the test dataset with longer prefixes. On the other hand, GANs begin with lower $\text{BLEU}_{\text{F4}}$ precision scores but demonstrate less performance decay as the prefix grows longer and gradually out-perform TF. This robustness against unseen prefixes shows that supervision from a learned critic can boost a model's stability in completing unseen sequences.
The better generative quality in TF and the stronger robustness against exposure bias in GANs are two different objectives in language modeling, but they can be pursued at the same time. Our model's improvement in both respects demonstrates one possibility for achieving this goal.
<<</Discussion>>>
<<</Experiment>>>
<<<Conclusion>>>
We have presented multi-range reinforcing and multi-entropy sampling as two training strategies built upon deeply supervised nets BIBREF19 and multi-entropy sampling BIBREF20. The two easy-to-implement strategies help alleviate the reward sparseness in RL training and tackle the exposure bias problem.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Abstract, Experiment"
],
"type": "disordered_section"
}
|
1909.00107
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Behavior Gated Language Models
<<<Abstract>>>
Most current language modeling techniques only exploit co-occurrence, semantic and syntactic information from the sequence of words. However, a range of information such as the state of the speaker and dynamics of the interaction might be useful. In this work we derive motivation from psycholinguistics and propose the addition of behavioral information into the context of language modeling. We propose the augmentation of language models with an additional module which analyzes the behavioral state of the current context. This behavioral information is used to gate the outputs of the language model before the final word prediction output. We show that the addition of behavioral context in language models achieves lower perplexities on behavior-rich datasets. We also confirm the validity of the proposed models on a variety of model architectures and improve on previous state-of-the-art models with generic domain Penn Treebank Corpus.
<<</Abstract>>>
<<<Introduction>>>
Recurrent neural network language models (RNNLM) can theoretically model the word history over an arbitrarily long length of time and thus have been shown to perform better than traditional n-gram models BIBREF0. Recent prior work has continuously improved the performance of RNNLMs through hyper-parameter tuning, training optimization methods, and development of new network architectures BIBREF1, BIBREF2, BIBREF3, BIBREF4.
On the other hand, many works have proposed the use of domain knowledge and additional information such as topics or parts-of-speech to improve language models. While syntactic tendencies can be inferred from a few preceding words, semantic coherence may require longer context and high level understanding of natural language, both of which are difficult to learn through purely statistical methods. This problem can be overcome by exploiting external information to capture long-range semantic dependencies. One common way of achieving this is by incorporating part-of-speech (POS) tags into the RNNLM as an additional feature to predict the next word BIBREF5, BIBREF6. Other useful linguistic features include conversation-type, which was shown to improve language modeling when combined with POS tags BIBREF7. Further improvements were achieved through the addition of socio-situational setting information and other linguistic features such as lemmas and topic BIBREF8.
The use of topic information to provide semantic context to language models has also been studied extensively BIBREF9, BIBREF10, BIBREF11, BIBREF12. Topic models are useful for extracting high level semantic structure via latent topics which can aid in better modeling of longer documents.
Recently, however, empirical studies involving investigation of different network architectures, hyper-parameter tuning, and optimization techniques have yielded better performance than the addition of contextual information BIBREF13, BIBREF14. In contrast to the majority of work that focus on improving the neural network aspects of RNNLM, we introduce psycholinguistic signals along with linguistic units to improve the fundamental language model.
In this work, we utilize behavioral information embedded in the language to aid the language model. We hypothesize that different psychological behavior states incite differences in the use of language BIBREF15, BIBREF16, and thus modeling these tendencies can provide useful information in statistical language modeling. And although not directly related, behavioral information may also correlate with conversation-type and topic. Thus, we propose the use of psycholinguistic behavior signals as a gating mechanism to augment typical language models. Effectively inferring behaviors from sources like spoken text, written articles can lead to personification of the language models in the speaker-writer arena.
<<</Introduction>>>
<<<Methodology>>>
In this section, we first describe a typical RNN based language model which serves as a baseline for this study. Second, we introduce the proposed behavior prediction model for extracting behavioral information. Finally, the proposed architecture of the language model which incorporates the behavioral information through a gating mechanism is presented.
<<<Language Model>>>
The basic RNNLM consists of a vanilla unidirectional LSTM which predicts the next word given the current and its word history at each time step. In other words, given a sequence of words $ \mathbf {x} \hspace{2.77771pt}{=}\hspace{2.77771pt}x_1, x_2, \ldots x_n$ as input, the network predicts a probability distribution of the next word $ y $ as $ P(y \mid \mathbf {x}) $. Figure FIGREF2 illustrates the basic architecture of the RNNLM.
Since our contribution is towards introducing behavior as a psycholinguistic feature for aiding the language modeling process, we stick with a reliable and simple LSTM-based RNN model and follow the recommendations from BIBREF1 for our baseline model.
<<</Language Model>>>
<<<Behavior Model>>>
The analysis and processing of human behavior informatics is crucial in many psychotherapy settings such as observational studies and patient therapy BIBREF17. Prior work has proposed the application of neural networks in modeling human behavior in a variety of clinical settings BIBREF18, BIBREF19, BIBREF20.
In this work we adopt a behavior model that predicts the likelihood of occurrence of various behaviors based on input text. Our model is based on the RNN architecture in Figure FIGREF2, but instead of the next word we predict the joint probability of behavior occurrences $ P(\mathbf {B} \mid \mathbf {x}) $ where $ \mathbf {B} \hspace{2.77771pt}{=}\hspace{2.77771pt}\lbrace b_{i}\rbrace $ and $ b_{i} $ is the occurrence of behavior $i$. In this work we apply the behaviors: Acceptance, Blame, Negativity, Positivity, and Sadness. This is elaborated more on in Section SECREF3.
<<</Behavior Model>>>
<<<Behavior Gated Language Model>>>
<<<Motivation>>>
Behavior understanding encapsulates a long-term trajectory of a person's psychological state. Through the course of communication, these states may manifest as short-term instances of emotion or sentiment. Previous work has studied the links between these psychological states and their effect on vocabulary and choice of words BIBREF15 as well as use of language BIBREF16. Motivated from these studies, we hypothesize that due to the duality of behavior and language we can improve language models by capturing variability in language use caused by different psychological states through the inclusion of behavioral information.
<<</Motivation>>>
<<<Proposed Model>>>
We propose to augment RNN language models with a behavior model that provides information relating to a speaker's psychological state. This behavioral information is combined with hidden layers of the RNNLM through a gating mechanism prior to output prediction of the next word. In contrast to typical language models, we propose to model $ P(\mathbf {y} \mid \mathbf {x}, \mathbf {z}) $ where $ \mathbf {z} \equiv f( P(\mathbf {B}\mid \mathbf {x}))$ for an RNN function $f(\cdot )$. The behavior model is implemented with a multi-layered RNN over the input sequence of words. The first recurrent layer of the behavior model is initialized with pre-trained weights from the model described in Section SECREF3 and fixed during language modeling training. An overview of the proposed behavior gated language model is shown in Figure FIGREF6. The RNN units shaded in green (lower section) denote the pre-trained weights from the behavior classification model which are fixed during the entirety of training. The abstract behavior outputs $ b_t $ of the pre-trained model are fed into a time-synced RNN, denoted in blue (upper section), which is subsequently used for gating the RNNLM predictions. The un-shaded RNN units correspond to typical RNNLM and operate in parallel to the former.
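As a rough illustration of the gating idea, the sketch below combines the language model's hidden states with the behavior RNN's outputs through a learned sigmoid gate before the output projection. This is not the authors' implementation: the elementwise gating form, the layer sizes, and the use of a plain trainable behavior LSTM (rather than one initialized from the frozen pre-trained classifier) are simplifying assumptions.

```python
import torch
import torch.nn as nn

class BehaviorGatedLM(nn.Module):
    """Sketch of a behavior-gated RNNLM: an LSTM language model whose hidden
    states are gated by the outputs of a parallel behavior RNN."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, behavior_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lm_rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # In the paper, the first behavior layer is initialized from the
        # pre-trained behavior classifier and frozen; here it is simplified.
        self.behavior_rnn = nn.LSTM(emb_dim, behavior_dim, batch_first=True)
        self.gate = nn.Linear(behavior_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):
        emb = self.embed(tokens)
        lm_h, _ = self.lm_rnn(emb)
        beh_h, _ = self.behavior_rnn(emb)
        gated = lm_h * torch.sigmoid(self.gate(beh_h))  # assumed gating form
        return self.out(gated)                          # next-word logits
```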
<<</Proposed Model>>>
<<</Behavior Gated Language Model>>>
<<</Methodology>>>
<<<Experimental Setup>>>
<<<Data>>>
<<<Behavior Related Corpora>>>
For evaluating the proposed model on behavior related data, we employ the Couples Therapy Corpus (CoupTher) BIBREF21 and Cancer Couples Interaction Dataset (Cancer) BIBREF22. These are the targeted conditions under which a behavior-gated language model can offer improved performance.
Couples Therapy Corpus: This corpus comprises dyadic conversations between real couples seeking marital counseling. The dataset consists of audio and video recordings along with their transcriptions. Each speaker is rated by multiple annotators over 33 behaviors. The dataset comprises approximately 0.83 million words with 10,000 unique entries, of which 0.5 million is used for training (0.24m for dev and 88k for test).
Cancer Couples Interaction Dataset: This dataset was gathered as part of an observational study of couples coping with advanced cancer. Advanced cancer patients and their spouse caregivers were recruited from clinics and asked to interact with each other in two structured discussions: neutral discussion and cancer related. Interactions were audio-recorded using small digital recorders worn by each participant. Manually transcribed audio has approximately 230,000 word tokens with a vocabulary size of 8173.
<<<Couple's Therapy Corpus>>>
We utilize the Couple's Therapy Corpus as an in-domain experimental corpus since our behavior classification model is also trained on the same. The RNNLM architecture is similar to BIBREF1, but with hyperparameters optimized for the couple's corpus. The results are tabulated in Table TABREF16 in terms of perplexity. We find that the behavior gated language models yield lower perplexity compared to vanilla LSTM language model. A relative improvement of 2.43% is obtained with behavior gating on the couple's data.
<<</Couple's Therapy Corpus>>>
<<<Cancer Couples Interaction Dataset>>>
To evaluate the validity of the proposed method on an out-of-domain but behavior related task, we utilize the Cancer Couples Interaction Dataset. Here both the language and the behavior models are trained on the Couple's Therapy Corpus. The Cancer dataset is used only for development (hyper-parameter tuning) and testing. We observe that the behavior gating helps achieve lower perplexity values with a relative improvement of 6.81%. The performance improvements on an out-of-domain task emphasize the effectiveness of behavior gated language models.
<<</Cancer Couples Interaction Dataset>>>
<<</Behavior Related Corpora>>>
<<<Penn Tree Bank Corpus>>>
In order to evaluate our proposed model on more generic language modeling tasks, we employ Penn Tree bank (PTB) BIBREF23, as preprocessed by BIBREF24. Since Penn Tree bank mainly comprises articles from the Wall Street Journal, it is not expected to contain substantial expressions of behavior.
<<<Previous state-of-the-art architectures>>>
Finally, we apply behavior gating on a previous state-of-the-art architecture, one that is most often used as a benchmark across various recent works. Specifically, we employ the AWD-LSTM proposed by BIBREF2 with QRNN BIBREF25 instead of LSTM. We observe positive results, with the AWD-LSTM augmented with behavior gating providing a relative improvement of 0.66% in perplexity (1.42% on the validation set; Table TABREF17).
<<</Previous state-of-the-art architectures>>>
<<</Penn Tree Bank Corpus>>>
<<</Data>>>
<<<Hyperparameters>>>
We augmented previous RNN language model architectures by BIBREF1 and BIBREF2 with our proposed behavior gates. We used the same architecture as in each work to maintain similar number of parameters and performed a grid search of hyperparameters such as learning rate, dropout, and batch size. The number of layers and size of the final layers of the behavior model was also optimized. We report the results of models based on the best validation result.
<<</Hyperparameters>>>
<<</Experimental Setup>>>
<<<Results>>>
We split the results into two parts. We first validate the proposed technique on behavior related language modeling tasks and then apply it on more generic domain Penn Tree bank dataset.
<<</Results>>>
<<<Conclusion & Future Work>>>
In this study, we introduce the state of the speaker/author into language modeling in the form of behavior signals. We track 5 behaviors, namely acceptance, blame, negativity, positivity, and sadness, using a 5-class multi-label behavior classification model. The behavior states are used as a gating mechanism for a typical RNN-based language model. We show through our experiments that the proposed technique improves language modeling perplexity, specifically in the case of behavior-rich scenarios. Finally, we show improvements on the previous state-of-the-art benchmark model with the Penn Tree Bank Corpus to underline the effect of behavior states in language modeling.
In the future, we plan to incorporate the behavior-gated language model into the task of automatic speech recognition (ASR). In such a scenario, we could derive both the past and the future behavior states from the ASR, which could then be used to gate the language model using two-pass re-scoring strategies. We expect the behavior states to be less prone to errors made by the ASR over a sufficiently long context and hence believe the future behavior states will provide further improvements.
<<</Conclusion & Future Work>>>
<<</Title>>>
|
{
"references": [
"Introduction, Results"
],
"type": "disordered_section"
}
|
1909.00107
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Behavior Gated Language Models
<<<Abstract>>>
Most current language modeling techniques only exploit co-occurrence, semantic and syntactic information from the sequence of words. However, a range of information such as the state of the speaker and dynamics of the interaction might be useful. In this work we derive motivation from psycholinguistics and propose the addition of behavioral information into the context of language modeling. We propose the augmentation of language models with an additional module which analyzes the behavioral state of the current context. This behavioral information is used to gate the outputs of the language model before the final word prediction output. We show that the addition of behavioral context in language models achieves lower perplexities on behavior-rich datasets. We also confirm the validity of the proposed models on a variety of model architectures and improve on previous state-of-the-art models with generic domain Penn Treebank Corpus.
<<</Abstract>>>
<<<Introduction>>>
Recurrent neural network language models (RNNLM) can theoretically model the word history over an arbitrarily long length of time and thus have been shown to perform better than traditional n-gram models BIBREF0. Recent prior work has continuously improved the performance of RNNLMs through hyper-parameter tuning, training optimization methods, and development of new network architectures BIBREF1, BIBREF2, BIBREF3, BIBREF4.
On the other hand, many works have proposed the use of domain knowledge and additional information such as topics or parts-of-speech to improve language models. While syntactic tendencies can be inferred from a few preceding words, semantic coherence may require longer context and high level understanding of natural language, both of which are difficult to learn through purely statistical methods. This problem can be overcome by exploiting external information to capture long-range semantic dependencies. One common way of achieving this is by incorporating part-of-speech (POS) tags into the RNNLM as an additional feature to predict the next word BIBREF5, BIBREF6. Other useful linguistic features include conversation-type, which was shown to improve language modeling when combined with POS tags BIBREF7. Further improvements were achieved through the addition of socio-situational setting information and other linguistic features such as lemmas and topic BIBREF8.
The use of topic information to provide semantic context to language models has also been studied extensively BIBREF9, BIBREF10, BIBREF11, BIBREF12. Topic models are useful for extracting high level semantic structure via latent topics which can aid in better modeling of longer documents.
Recently, however, empirical studies involving investigation of different network architectures, hyper-parameter tuning, and optimization techniques have yielded better performance than the addition of contextual information BIBREF13, BIBREF14. In contrast to the majority of work that focus on improving the neural network aspects of RNNLM, we introduce psycholinguistic signals along with linguistic units to improve the fundamental language model.
In this work, we utilize behavioral information embedded in the language to aid the language model. We hypothesize that different psychological behavior states incite differences in the use of language BIBREF15, BIBREF16, and thus modeling these tendencies can provide useful information in statistical language modeling. And although not directly related, behavioral information may also correlate with conversation-type and topic. Thus, we propose the use of psycholinguistic behavior signals as a gating mechanism to augment typical language models. Effectively inferring behaviors from sources like spoken text, written articles can lead to personification of the language models in the speaker-writer arena.
<<</Introduction>>>
<<<Methodology>>>
In this section, we first describe a typical RNN based language model which serves as a baseline for this study. Second, we introduce the proposed behavior prediction model for extracting behavioral information. Finally, the proposed architecture of the language model which incorporates the behavioral information through a gating mechanism is presented.
<<<Language Model>>>
The basic RNNLM consists of a vanilla unidirectional LSTM which predicts the next word given the current and its word history at each time step. In other words, given a sequence of words $ \mathbf {x} \hspace{2.77771pt}{=}\hspace{2.77771pt}x_1, x_2, \ldots x_n$ as input, the network predicts a probability distribution of the next word $ y $ as $ P(y \mid \mathbf {x}) $. Figure FIGREF2 illustrates the basic architecture of the RNNLM.
Since our contribution is towards introducing behavior as a psycholinguistic feature for aiding the language modeling process, we stick with a reliable and simple LSTM-based RNN model and follow the recommendations from BIBREF1 for our baseline model.
<<</Language Model>>>
<<<Behavior Model>>>
The analysis and processing of human behavior informatics is crucial in many psychotherapy settings such as observational studies and patient therapy BIBREF17. Prior work has proposed the application of neural networks in modeling human behavior in a variety of clinical settings BIBREF18, BIBREF19, BIBREF20.
In this work we adopt a behavior model that predicts the likelihood of occurrence of various behaviors based on input text. Our model is based on the RNN architecture in Figure FIGREF2, but instead of the next word we predict the joint probability of behavior occurrences $ P(\mathbf {B} \mid \mathbf {x}) $ where $ \mathbf {B} \hspace{2.77771pt}{=}\hspace{2.77771pt}\lbrace b_{i}\rbrace $ and $ b_{i} $ is the occurrence of behavior $i$. In this work we apply the behaviors: Acceptance, Blame, Negativity, Positivity, and Sadness. This is elaborated more on in Section SECREF3.
<<</Behavior Model>>>
<<<Behavior Gated Language Model>>>
<<<Motivation>>>
Behavior understanding encapsulates a long-term trajectory of a person's psychological state. Through the course of communication, these states may manifest as short-term instances of emotion or sentiment. Previous work has studied the links between these psychological states and their effect on vocabulary and choice of words BIBREF15 as well as use of language BIBREF16. Motivated from these studies, we hypothesize that due to the duality of behavior and language we can improve language models by capturing variability in language use caused by different psychological states through the inclusion of behavioral information.
<<</Motivation>>>
<<<Proposed Model>>>
We propose to augment RNN language models with a behavior model that provides information relating to a speaker's psychological state. This behavioral information is combined with hidden layers of the RNNLM through a gating mechanism prior to output prediction of the next word. In contrast to typical language models, we propose to model $ P(\mathbf {y} \mid \mathbf {x}, \mathbf {z}) $ where $ \mathbf {z} \equiv f( P(\mathbf {B}\mid \mathbf {x}))$ for an RNN function $f(\cdot )$. The behavior model is implemented with a multi-layered RNN over the input sequence of words. The first recurrent layer of the behavior model is initialized with pre-trained weights from the model described in Section SECREF3 and fixed during language modeling training. An overview of the proposed behavior gated language model is shown in Figure FIGREF6. The RNN units shaded in green (lower section) denote the pre-trained weights from the behavior classification model which are fixed during the entirety of training. The abstract behavior outputs $ b_t $ of the pre-trained model are fed into a time-synced RNN, denoted in blue (upper section), which is subsequently used for gating the RNNLM predictions. The un-shaded RNN units correspond to typical RNNLM and operate in parallel to the former.
<<</Proposed Model>>>
<<</Behavior Gated Language Model>>>
<<</Methodology>>>
<<<Experimental Setup>>>
<<<Data>>>
<<<Behavior Related Corpora>>>
For evaluating the proposed model on behavior related data, we employ the Couples Therapy Corpus (CoupTher) BIBREF21 and Cancer Couples Interaction Dataset (Cancer) BIBREF22. These are the targeted conditions under which a behavior-gated language model can offer improved performance.
Couples Therapy Corpus: This corpus comprises dyadic conversations between real couples seeking marital counseling. The dataset consists of audio and video recordings along with their transcriptions. Each speaker is rated by multiple annotators over 33 behaviors. The dataset comprises approximately 0.83 million words with 10,000 unique entries, of which 0.5 million is used for training (0.24m for dev and 88k for test).
Cancer Couples Interaction Dataset: This dataset was gathered as part of an observational study of couples coping with advanced cancer. Advanced cancer patients and their spouse caregivers were recruited from clinics and asked to interact with each other in two structured discussions: neutral discussion and cancer related. Interactions were audio-recorded using small digital recorders worn by each participant. Manually transcribed audio has approximately 230,000 word tokens with a vocabulary size of 8173.
<<<Couple's Therapy Corpus>>>
We utilize the Couple's Therapy Corpus as an in-domain experimental corpus since our behavior classification model is also trained on the same. The RNNLM architecture is similar to BIBREF1, but with hyperparameters optimized for the couple's corpus. The results are tabulated in Table TABREF16 in terms of perplexity. We find that the behavior gated language models yield lower perplexity compared to vanilla LSTM language model. A relative improvement of 2.43% is obtained with behavior gating on the couple's data.
<<</Couple's Therapy Corpus>>>
<<<Cancer Couples Interaction Dataset>>>
To evaluate the validity of the proposed method on an out-of-domain but behavior related task, we utilize the Cancer Couples Interaction Dataset. Here both the language and the behavior models are trained on the Couple's Therapy Corpus. The Cancer dataset is used only for development (hyper-parameter tuning) and testing. We observe that the behavior gating helps achieve lower perplexity values with a relative improvement of 6.81%. The performance improvements on an out-of-domain task emphasize the effectiveness of behavior gated language models.
<<</Cancer Couples Interaction Dataset>>>
<<</Behavior Related Corpora>>>
<<<Penn Tree Bank Corpus>>>
In order to evaluate our proposed model on more generic language modeling tasks, we employ Penn Tree bank (PTB) BIBREF23, as preprocessed by BIBREF24. Since Penn Tree bank mainly comprises articles from the Wall Street Journal, it is not expected to contain substantial expressions of behavior.
<<<Previous state-of-the-art architectures>>>
Finally, we apply behavior gating on a previous state-of-the-art architecture, one that is most often used as a benchmark across various recent works. Specifically, we employ the AWD-LSTM proposed by BIBREF2 with QRNN BIBREF25 instead of LSTM. We observe positive results, with the AWD-LSTM augmented with behavior gating providing a relative improvement of 0.66% in perplexity (1.42% on the validation set; Table TABREF17).
<<</Previous state-of-the-art architectures>>>
<<</Penn Tree Bank Corpus>>>
<<</Data>>>
<<<Hyperparameters>>>
We augmented previous RNN language model architectures by BIBREF1 and BIBREF2 with our proposed behavior gates. We used the same architecture as in each work to maintain similar number of parameters and performed a grid search of hyperparameters such as learning rate, dropout, and batch size. The number of layers and size of the final layers of the behavior model was also optimized. We report the results of models based on the best validation result.
<<</Hyperparameters>>>
<<</Experimental Setup>>>
<<<Results>>>
We split the results into two parts. We first validate the proposed technique on behavior related language modeling tasks and then apply it on more generic domain Penn Tree bank dataset.
<<</Results>>>
<<<Conclusion & Future Work>>>
In this study, we introduce the state of the speaker/author into language modeling in the form of behavior signals. We track 5 behaviors, namely acceptance, blame, negativity, positivity, and sadness, using a 5-class multi-label behavior classification model. The behavior states are used as a gating mechanism for a typical RNN-based language model. We show through our experiments that the proposed technique improves language modeling perplexity, specifically in the case of behavior-rich scenarios. Finally, we show improvements on the previous state-of-the-art benchmark model with the Penn Tree Bank Corpus to underline the effect of behavior states in language modeling.
In the future, we plan to incorporate the behavior-gated language model into the task of automatic speech recognition (ASR). In such a scenario, we could derive both the past and the future behavior states from the ASR, which could then be used to gate the language model using two-pass re-scoring strategies. We expect the behavior states to be less prone to errors made by the ASR over a sufficiently long context and hence believe the future behavior states will provide further improvements.
<<</Conclusion & Future Work>>>
<<</Title>>>
|
{
"references": [
"Abstract, Introduction"
],
"type": "disordered_section"
}
|
2003.01006
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources
<<<Abstract>>>
We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable.
<<</Abstract>>>
<<<Scientific Entity Annotations>>>
By starting with a STEM corpus of scholarly abstracts for annotating with scientific entities, we differ from existing work addressing this task since we are going beyond the domain restriction that so far seems to encompass scientific IE. For entity annotations, we rely on existing scientific concept formalisms BIBREF0, BIBREF1, BIBREF2 that appear to propose generic scientific concept types that can bridge the domains we consider, thereby offering a uniform entity selection framework. In the following subsections, we describe our annotation task in detail, after which we conclude with benchmark results.
<<<Our Annotation Process>>>
The corpus for computing inter-annotator agreement was annotated by two postdoctoral researchers in Computer Science. To develop annotation guidelines, a small pilot annotation exercise was performed on 10 abstracts (one per domain) with a set of surmised generically applicable scientific concepts such as Task, Process, Material, Object, Method, Data, Model, Results, etc., taken from existing work. Over the course of three annotation trials, these concepts were iteratively pruned where concepts that did not cover all domains were dropped, resulting in four finalized concepts, viz. Process, Method, Material, and Data as our resultant set of generic scientific concepts (see Table TABREF3 for their definitions). The subsequent annotation task entailed linguistic considerations for the precise selection of entities as one of the four scientific concepts based on their part-of-speech tag or phrase type. Process entities were verbs (e.g., “prune” in Agr), verb phrases (e.g., “integrating results” in Mat), or noun phrases (e.g. “this transport process” in Bio); Method entities comprised noun phrases containing phrase endings such as simulation, method, algorithm, scheme, technique, system, etc.; Material were nouns or noun phrases (e.g., “forest trees” in Agr, “electrons” in Ast or Che, “tephra” in ES); and majority of the Data entities were numbers otherwise noun phrases (e.g., “(2.5$\pm $1.5)kms$^{-1}$” representing a velocity value in Ast, “plant available P status” in Agr). Summarily, the resulting annotation guidelines hinged upon the following five considerations:
To ensure consistent scientific entity spans, entities were annotated as definite noun phrases where possible. In later stages, the extraneous determiners and articles could be dropped as deemed appropriate.
Coreferring lexical units for scientific entities in the context of a single abstract were annotated with the same concept type.
Quantifiable lexical units such as numbers (e.g., years 1999, measurements 4km) or even as phrases (e.g., vascular risk) were annotated as Data.
Where possible, the most precise text reference (i.e., phrases with qualifiers) regarding materials used in the experiment were annotated. For instance, “carbon atoms in graphene” was annotated as a single Material entity and not separately as “carbon atoms,” “graphene.”
Any confusion in classifying scientific entities as one of four types was resolved using the following concept precedence: Method $>$ Process $>$ Data $>$ Material, where the concept appearing earlier in the list was preferred.
After finalizing the concepts and updating the guidelines, the final annotation task proceeded in two phases.
In phase I, five abstracts per domain (i.e. 50 abstracts) were annotated by both annotators and the inter-annotator agreement was computed using Cohen's $\kappa $ BIBREF4. Results showed a moderate inter-annotator agreement at 0.52 $\kappa $.
Next, in phase II, one of the annotators interviewed subject specialists in each of the ten domains about the choice of concepts and her annotation decisions on their respective domain corpus. The feedback from the interviews was systematically categorized into error types, and these errors were discussed by both annotators. Following these discussions, the 50 abstracts from phase I were independently reannotated. The annotators could obtain a substantial overall agreement of 0.76 $\kappa $ after phase II.
In Table TABREF16, we report the IAA scores obtained per domain and overall. The scores show that the annotators had a substantial agreement in seven domains, while only a moderate agreement was reached in three domains, viz. Agr, Mat, and Ast.
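Agreement figures of this kind can be reproduced with standard tooling. The snippet below is only an illustrative sketch (with made-up labels, not the corpus annotations), using scikit-learn's implementation of Cohen's kappa on two annotators' concept labels for the same entity spans.

```python
from sklearn.metrics import cohen_kappa_score

# Toy example: concept labels assigned by two annotators to the same spans.
annotator_a = ["Process", "Material", "Data", "Material", "Method", "Process"]
annotator_b = ["Process", "Material", "Data", "Process", "Method", "Process"]

print(f"Cohen's kappa: {cohen_kappa_score(annotator_a, annotator_b):.2f}")
```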
<<<Annotation Error Analysis>>>
We discuss some of the changes the interviewer annotator made in phase II after consultation with the subject experts.
In total, 21% of the phase I annotations were changed: Process accounted for a major proportion (nearly 54%) of the changes. Considerable inconsistency was found in annotating verbs like “increasing”, “decreasing”, “enhancing”, etc., as Process or not. Interviews with subject experts confirmed that such verbs were a relevant detail of the research investigation and hence should be annotated, so 61% of the Process changes came from additionally annotating these verbs. Material was the second most frequently changed concept in phase II, accounting for 23% of the overall changes. Nearly 32% of the changes under Material came from consistently reannotating phrases about models, tools, and systems. Another 22% of the Material changes arose where spatial locations were an essential part of the investigation, such as in the Ast and ES domains; these were decided to be included in the phase II set as Material. Finally, some changes emerged from a lack of domain expertise, mainly in the medical domain (4.3% of the overall changes), in resolving confusion between the Process and Method concept types. Most of the remaining changes were based on the treatment of conjunctive spans or lists.
Subsequently, the remaining 60 abstracts (six per domain) were annotated by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus.
<<</Annotation Error Analysis>>>
<<<Annotated Corpus Characteristics>>>
Table TABREF17 shows our annotated corpus characteristics. Our corpus comprises a total of 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities. The number of entities per abstract directly correlates with the length of the abstracts (Pearson's R 0.97). Among the concepts, Process and Material directly correlate with abstract length (R 0.8 and 0.83, respectively), while Data has only a slight correlation (R 0.35) and Method has no correlation (R 0.02).
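As a minimal illustration of how such correlations can be computed (the counts below are invented and are not the actual corpus statistics):

```python
from scipy.stats import pearsonr

# Hypothetical per-abstract figures: abstract length in tokens vs. number of entities.
abstract_lengths = [120, 180, 150, 210, 95, 170]
entity_counts = [14, 22, 18, 26, 11, 20]

r, p_value = pearsonr(abstract_lengths, entity_counts)
print(f"Pearson's R = {r:.2f} (p = {p_value:.4f})")
```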
In Figure FIGREF18, we show an example instance of a manually created text graph from the scientific entities in one abstract. The graph highlights that linguistic relations such as synonymy, hypernymy, meronymy, as well as OpenIE relations are poignant even between scientific entities.
<<</Annotated Corpus Characteristics>>>
<<<Annotation Task Tools>>>
During the annotation procedure, each annotator was shown the entities, grouped by domain and file name, in Google Sheets columns, alongside a view of the current abstract being annotated in the BRAT interface (Stenetorp et al., 2012) for context information about the entities. For entity resolution, i.e. linking and disambiguation, the annotators had local installations of specific time-stamped Wikipedia and Wiktionary dumps to enable future persistent references to the links, since the Wiki sources are actively revised. They queried the local dumps using the DKPro JWPL tool BIBREF8 for Wikipedia and the DKPro JWKTL tool BIBREF9 for Wiktionary, where both tools enable optimized search through the large Wiki data volume.
<<</Annotation Task Tools>>>
<<<Annotation Procedure for Entity Resolution>>>
Through iterative pilot annotation trials on the same pilot dataset as before, the annotators delineated an ordered annotation procedure depicted in the flowchart in Figure FIGREF28. There are two main annotation phases, viz. a preprocessing phase (determining linkability, determining whether an entity is decomposable into shorter collocations), and the entity resolution phase.
The actual annotation task then proceeded, in which to compute agreement scores, the annotators worked on the same set of 50 scholarly abstracts that they had used earlier to compute the scores for the scientific entity annotations.
<<<Linkability.>>>
In this first step, entities that conveyed a sense of scientific jargon were deemed linkable.
A natural question that arises, in the context of the Linkability criteria, is: Which stage 1 annotated scientific entities were now deemed unlinkable? They were 1) Data entities that are numbers; 2) entities that are coreference mentions which, as isolated units, lost their precise sense (e.g., “development”); and 3) Process verbs (e.g., “decreasing”, “reconstruct”, etc.). Still, having identified these cases, a caveat remained: except for entities of type Data, the remaining decisions made in this step involved a certain degree of subjectivity because, for instance, not all Process verbs were unlinkable (e.g., “flooding”). Nonetheless, at the end of this step, the annotators obtained a high IAA score at 0.89 $\kappa $. From the agreement scores, we found that the Linkability decisions could be made reliably and consistently on the data.
<<</Linkability.>>>
<<<Splitting phrases into shorter collocations.>>>
While preference was given to annotating non-compositional noun phrases as scientific entities in stage 1, consecutive occurrences of entities of the same concept type separated only by prepositions or conjunctions were merged into longer spans. As examples, consider the phrases “geysers on south polar region,” and “plume of water ice molecules and dust” in Figure FIGREF18. These phrases, respectively, can be meaningfully split as “geysers” and “south polar region” for the first example, and “plume”, “water ice molecules”, and “dust” for the second. As demonstrated in these examples, the stage 1 entities we split in this step are syntactically-flexible multi-word expressions which did not have a strict constraint on composition BIBREF10. For such expressions, we query Wikipedia or Google to identify their splits judging from the number of results returned and whether, in the results, the phrases appeared in authoritative sources (e.g., as overview topics in publishing platforms such as ScienceDirect). Since search engines operate on a vast amount of data, they are a reliable source for determining phrases with a strong statistical regularity, i.e. determining collocations.
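The paper does not specify the exact querying setup; the sketch below is only a rough illustration of such a frequency check, using the public MediaWiki search API, where the endpoint parameters, the thresholding decision, and the example phrases are assumptions made for illustration.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def wikipedia_hits(phrase: str) -> int:
    """Return the number of Wikipedia search hits for an exact phrase query."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": f'"{phrase}"',  # quoted to request a phrase match
        "srlimit": 1,
        "format": "json",
    }
    response = requests.get(API, params=params, timeout=10)
    return response.json()["query"]["searchinfo"]["totalhits"]

# Candidate splits of a longer stage 1 span (illustrative phrases only).
for candidate in ["water ice molecules", "plume of water ice molecules and dust"]:
    print(candidate, "->", wikipedia_hits(candidate), "hits")
```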
With a focus on obtaining agreement scores for entity resolution, the annotators bypassed this stage for computing independent agreement and attempted it mutually as follows. One annotator determined all splits, wherever required, first. The second annotator acted as judge by going through all the splits and proposing new splits in case of disagreement. The disagreements were discussed by both annotators, and the previous steps were repeated iteratively until the dataset was uniformly split. After this stage, both annotators had the same set of entities for resolution.
<<</Splitting phrases into shorter collocations.>>>
<<<Entity Resolution (ER) Annotation.>>>
In this stage, the annotators resolved each entity from the previous step to encyclopedic and lexicographic knowledge bases. While, in principle, multiple knowledge sources can be leveraged, this study only examines scientific entities in the context of their Wiki-linkability.
Wikipedia, as the largest online encyclopedia (with nearly 5.9 million English articles) offers a wide coverage of real-world entities, and based on its vast community of editors with editing patterns at the rate of 1.8 edits per second, is considered a reliable source of information. It is pervasively adopted in automatic EL tasks BIBREF11, BIBREF12, BIBREF13 to disambiguate the names of people, places, organizations, etc., to their real-world identities. We shift from this focus on proper names as the traditional Wikification EL purpose has been, to its, thus far, seemingly less tapped-in conceptual encyclopedic knowledge of nominal scientific entities.
Wiktionary is the largest freely available dictionary resource. Owing to its vast community of curators, it rivals the traditional expert-curated lexicographic resource WordNet BIBREF14 in terms of coverage and updates, where the latter evolves more slowly. For English, Wiktionary has nine times as many entries and at least five times as many senses compared to WordNet. As a more pertinent neologism in the context of our STEM data, consider the sense of the term “dropout” as a method for regularizing neural network algorithms, which is already present in Wiktionary. While WSD has traditionally used WordNet for its high-quality semantic network and longer prevalence in the linguistics community (c.f. Navigli (2009) for a comprehensive survey), we adopt Wiktionary, thus maintaining our focus on collaboratively curated resources.
In WSD, entities from all parts-of-speech are enriched w.r.t. language and wordsmithing. But it excludes in-depth factual and encyclopedic information, which otherwise is contained in Wikipedia. Thus, Wikipedia and Wiktionary are viewed as largely complementary.
<<</Entity Resolution (ER) Annotation.>>>
<<<ER Annotation Task formalism.>>>
Given a scholarly abstract $A$ comprising a set of entities $E = \lbrace e_{1}, ... ,e_{N}\rbrace $, the annotation goal is to produce a mapping from $E$ to a set of Wikipedia pages ($p_1,...,p_N$) and Wiktionary senses ($s_1,...,s_N$) as $R = \lbrace (p_1,s_1), ... , (p_N,s_N)\rbrace $. For entities without a mapping, the corresponding $p$ or $s$ refers to Nil.
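As a minimal sketch, the mapping $R$ can be represented with a small data structure such as the one below; the entity strings and the Wikipedia/Wiktionary identifiers are purely illustrative and not taken from the gold annotations.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resolution:
    """One (Wikipedia page, Wiktionary sense) pair; None encodes Nil."""
    wikipedia_page: Optional[str]
    wiktionary_sense: Optional[str]

# Illustrative mapping R for a few entities e_i of one abstract A.
R = {
    "dropout": Resolution("wiki:Dropout_(neural_networks)", "wikt:dropout/noun-3"),
    "geysers": Resolution("wiki:Geyser", "wikt:geyser/noun-1"),
    "(2.5±1.5)km/s": Resolution(None, None),  # a Data entity left as Nil
}

for entity, res in R.items():
    print(f"{entity} -> {res.wikipedia_page} | {res.wiktionary_sense}")
```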
The annotators followed comprehensive guidelines for ER including exceptions. E.g., the conjunctive phrase “acid/alkaline phosphatase activity” was semantically treated as the following two phrases “acid phosphatase activity” or “alkaline phosphatase activity” for EL, however, in the text it was retained as “acid” and “alkaline phosphatase activity.” Since WSD is performed over exact word-forms without assuming any semantic extension, it was not performed for “acid.” Annotations were also made for complex forms of reference such as meronymy (e.g., space instrument “CAPS” to spacecraft “wiki:Cassini Huygens” of which it is a part), or hypernymy (e.g., “parents” in “genepool parents” to “wiki:Ancestor”). As a result of the annotation task, the annotators obtained 82.87% rate of agreement in the EL task and a $\kappa $ score of 0.86 in the WSD task. Contrary to WSD expectations as a challenging linguistics task BIBREF15, we show high agreement; this we attribute to the entities' direct scientific sense and availability in Wiktionary (e.g., “dropout”).
Subsequently, the ER annotation for the remaining 60 abstracts (six per domain) were performed by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus.
<<</ER Annotation Task formalism.>>>
<<</Annotation Procedure for Entity Resolution>>>
<<</Our Annotation Process>>>
<<<Performance Benchmark>>>
In the second stage of the study, we perform word sense disambiguation and link our entities to authoritative sources.
<<</Performance Benchmark>>>
<<</Scientific Entity Annotations>>>
<<<Scientific Entity Resolution>>>
Aside from the four scientific concepts facilitating a common understanding of scientific entities in a multidisciplinary setting, the fact that they are just four made the human annotation task feasible. Utilizing additional concepts would have resulted in a prohibitively expensive human annotation task. Nevertheless, there are existing datasets (particularly in the biomedical domain, e.g., GENIA BIBREF6) that have adopted the conceptual framework in rich domain-specific semantic ontologies. Our work, while related, is different since we target the annotation of multidisciplinary scientific entities that facilitates a low annotation entrance barrier to producing such data. This is beneficial since it enables the task to be performed in a domain-independent manner by researchers, but perhaps not crowdworkers, unless screening tests for a certain level of scientific expertise are created.
Nonetheless, we recognize that the four categories might be too limiting for real-world usage. Further, the scientific entities from stage 1 remain susceptible to subjective interpretation without additional information. Therefore, in a similar vein to adopting domain-specific ontologies, we now perform entity linking (EL) to Wikipedia and word sense disambiguation (WSD) to Wiktionary.
<<<Evaluation>>>
We do not observe a significant impact of the long-tailed list phenomenon of unresolved entities in our data (c.f. Table TABREF36, only 17% did not have EL annotations). Results on more recent publications should perhaps prove more conclusive in this respect for newly introduced concepts, since the abstracts in our dataset were published between 2012 and 2014.
<<</Evaluation>>>
<<</Scientific Entity Resolution>>>
<<<Conclusion>>>
The STEM-ECR v1.0 corpus of scientific abstracts offers multidisciplinary Process, Method, Material, and Data entities that are disambiguated using Wiki-based encyclopedic and lexicographic sources, thus facilitating links between scientific publications and real-world knowledge (see the concept enrichment we obtain from Wikipedia for our entities in Figure ). We have found that these Wikipedia categories do enable a semantic enrichment of our entities over our generic four concept formalism as Process, Material, Method, and Data (as an illustration, the top 30 Wiki categories for each of our four generic concept types are shown in the Appendix). Further, among the various domains in our multidisciplinary STEM corpus, the inclusion of understudied domains like Mathematics, Astronomy, Earth Science, and Material Science notably makes our corpus particularly unique w.r.t. the investigation of their scientific entities. This is a step toward exploring domain independence in scientific IE. Our corpus can be leveraged for machine learning experiments in several settings: as a vital active-learning test-bed for curating more varied entity representations BIBREF16; to explore domain-independence versus domain-dependence aspects in scientific IE; for EL and WSD extensions to other ontologies or lexicographic sources; and as a knowledge resource to train a reading machine (such as PIKES BIBREF17 or FRED BIBREF18) that generates more knowledge from massive streams of interdisciplinary scientific articles. We plan to extend this corpus with relations to enable building knowledge representation models such as knowledge graphs in a domain-independent manner.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Scientific Entity Resolution"
],
"type": "disordered_section"
}
|
2003.01006
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources
<<<Abstract>>>
We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable.
<<</Abstract>>>
<<<Scientific Entity Annotations>>>
By starting with a STEM corpus of scholarly abstracts for annotating with scientific entities, we differ from existing work addressing this task since we are going beyond the domain restriction that so far seems to encompass scientific IE. For entity annotations, we rely on existing scientific concept formalisms BIBREF0, BIBREF1, BIBREF2 that appear to propose generic scientific concept types that can bridge the domains we consider, thereby offering a uniform entity selection framework. In the following subsections, we describe our annotation task in detail, after which we conclude with benchmark results.
<<<Our Annotation Process>>>
The corpus for computing inter-annotator agreement was annotated by two postdoctoral researchers in Computer Science. To develop annotation guidelines, a small pilot annotation exercise was performed on 10 abstracts (one per domain) with a set of surmised generically applicable scientific concepts such as Task, Process, Material, Object, Method, Data, Model, Results, etc., taken from existing work. Over the course of three annotation trials, these concepts were iteratively pruned where concepts that did not cover all domains were dropped, resulting in four finalized concepts, viz. Process, Method, Material, and Data as our resultant set of generic scientific concepts (see Table TABREF3 for their definitions). The subsequent annotation task entailed linguistic considerations for the precise selection of entities as one of the four scientific concepts based on their part-of-speech tag or phrase type. Process entities were verbs (e.g., “prune” in Agr), verb phrases (e.g., “integrating results” in Mat), or noun phrases (e.g. “this transport process” in Bio); Method entities comprised noun phrases containing phrase endings such as simulation, method, algorithm, scheme, technique, system, etc.; Material were nouns or noun phrases (e.g., “forest trees” in Agr, “electrons” in Ast or Che, “tephra” in ES); and majority of the Data entities were numbers otherwise noun phrases (e.g., “(2.5$\pm $1.5)kms$^{-1}$” representing a velocity value in Ast, “plant available P status” in Agr). Summarily, the resulting annotation guidelines hinged upon the following five considerations:
To ensure consistent scientific entity spans, entities were annotated as definite noun phrases where possible. In later stages, the extraneous determiners and articles could be dropped as deemed appropriate.
Coreferring lexical units for scientific entities in the context of a single abstract were annotated with the same concept type.
Quantifiable lexical units such as numbers (e.g., years 1999, measurements 4km) or even as phrases (e.g., vascular risk) were annotated as Data.
Where possible, the most precise text reference (i.e., phrases with qualifiers) regarding materials used in the experiment were annotated. For instance, “carbon atoms in graphene” was annotated as a single Material entity and not separately as “carbon atoms,” “graphene.”
Any confusion in classifying scientific entities as one of four types was resolved using the following concept precedence: Method $>$ Process $>$ Data $>$ Material, where the concept appearing earlier in the list was preferred.
After finalizing the concepts and updating the guidelines, the final annotation task proceeded in two phases.
In phase I, five abstracts per domain (i.e. 50 abstracts) were annotated by both annotators and the inter-annotator agreement was computed using Cohen's $\kappa $ BIBREF4. Results showed a moderate inter-annotator agreement at 0.52 $\kappa $.
Next, in phase II, one of the annotators interviewed subject specialists in each of the ten domains about the choice of concepts and her annotation decisions on their respective domain corpus. The feedback from the interviews was systematically categorized into error types, and these errors were discussed by both annotators. Following these discussions, the 50 abstracts from phase I were independently reannotated. The annotators obtained a substantial overall agreement of 0.76 $\kappa $ after phase II.
In Table TABREF16, we report the IAA scores obtained per domain and overall. The scores show that the annotators had a substantial agreement in seven domains, while only a moderate agreement was reached in three domains, viz. Agr, Mat, and Ast.
<<<Annotation Error Analysis>>>
We discuss some of the changes the interviewer annotator made in phase II after consultation with the subject experts.
In total, 21% of the phase I annotations were changed: Process accounted for a major proportion (nearly 54%) of the changes. Considerable inconsistency was found in annotating verbs like “increasing”, “decreasing”, “enhancing”, etc., as Process or not. Interviews with subject experts confirmed that such verbs were a relevant detail of the research investigation and hence should be annotated, so 61% of the Process changes came from additionally annotating these verbs. Material was the second most frequently changed concept in phase II, accounting for 23% of the overall changes. Nearly 32% of the changes under Material came from consistently reannotating phrases about models, tools, and systems. Another 22% of the Material changes arose where spatial locations were an essential part of the investigation, such as in the Ast and ES domains; these were decided to be included in the phase II set as Material. Finally, some changes emerged from a lack of domain expertise, mainly in the medical domain (4.3% of the overall changes), in resolving confusion between the Process and Method concept types. Most of the remaining changes were based on the treatment of conjunctive spans or lists.
Subsequently, the remaining 60 abstracts (six per domain) were annotated by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus.
<<</Annotation Error Analysis>>>
<<<Annotated Corpus Characteristics>>>
Table TABREF17 shows our annotated corpus characteristics. Our corpus comprises a total of 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities. The number of entities per abstract directly correlates with the length of the abstracts (Pearson's R 0.97). Among the concepts, Process and Material directly correlate with abstract length (R 0.8 and 0.83, respectively), while Data has only a slight correlation (R 0.35) and Method has no correlation (R 0.02).
In Figure FIGREF18, we show an example instance of a manually created text graph from the scientific entities in one abstract. The graph highlights that linguistic relations such as synonymy, hypernymy, meronymy, as well as OpenIE relations are poignant even between scientific entities.
<<</Annotated Corpus Characteristics>>>
<<<Annotation Task Tools>>>
During the annotation procedure, each annotator was shown the entities, grouped by domain and file name, in Google Sheets columns, alongside a view of the current abstract being annotated in the BRAT interface (Stenetorp et al., 2012) for context information about the entities. For entity resolution, i.e. linking and disambiguation, the annotators had local installations of specific time-stamped Wikipedia and Wiktionary dumps to enable future persistent references to the links, since the Wiki sources are actively revised. They queried the local dumps using the DKPro JWPL tool BIBREF8 for Wikipedia and the DKPro JWKTL tool BIBREF9 for Wiktionary, where both tools enable optimized search through the large Wiki data volume.
<<</Annotation Task Tools>>>
<<<Annotation Procedure for Entity Resolution>>>
Through iterative pilot annotation trials on the same pilot dataset as before, the annotators delineated an ordered annotation procedure depicted in the flowchart in Figure FIGREF28. There are two main annotation phases, viz. a preprocessing phase (determining linkability, determining whether an entity is decomposable into shorter collocations), and the entity resolution phase.
The actual annotation task then proceeded, in which to compute agreement scores, the annotators worked on the same set of 50 scholarly abstracts that they had used earlier to compute the scores for the scientific entity annotations.
<<<Linkability.>>>
In this first step, entities that conveyed a sense of scientific jargon were deemed linkable.
A natural question that arises, in the context of the Linkability criteria, is: Which stage 1 annotated scientific entities were now deemed unlinkable? They were 1) Data entities that are numbers; 2) entities that are coreference mentions which, as isolated units, lost their precise sense (e.g., “development”); and 3) Process verbs (e.g., “decreasing”, “reconstruct”, etc.). Still, having identified these cases, a caveat remained: except for entities of type Data, the remaining decisions made in this step involved a certain degree of subjectivity because, for instance, not all Process verbs were unlinkable (e.g., “flooding”). Nonetheless, at the end of this step, the annotators obtained a high IAA score at 0.89 $\kappa $. From the agreement scores, we found that the Linkability decisions could be made reliably and consistently on the data.
<<</Linkability.>>>
<<<Splitting phrases into shorter collocations.>>>
While preference was given to annotating non-compositional noun phrases as scientific entities in stage 1, consecutive occurrences of entities of the same concept type separated only by prepositions or conjunctions were merged into longer spans. As examples, consider the phrases “geysers on south polar region,” and “plume of water ice molecules and dust” in Figure FIGREF18. These phrases, respectively, can be meaningfully split as “geysers” and “south polar region” for the first example, and “plume”, “water ice molecules”, and “dust” for the second. As demonstrated in these examples, the stage 1 entities we split in this step are syntactically-flexible multi-word expressions which did not have a strict constraint on composition BIBREF10. For such expressions, we query Wikipedia or Google to identify their splits judging from the number of results returned and whether, in the results, the phrases appeared in authoritative sources (e.g., as overview topics in publishing platforms such as ScienceDirect). Since search engines operate on a vast amount of data, they are a reliable source for determining phrases with a strong statistical regularity, i.e. determining collocations.
With a focus on obtaining agreement scores for entity resolution, the annotators bypassed this stage for computing independent agreement and attempted it mutually as follows. One annotator determined all splits, wherever required, first. The second annotator acted as judge by going through all the splits and proposing new splits in case of disagreement. The disagreements were discussed by both annotators, and the previous steps were repeated iteratively until the dataset was uniformly split. After this stage, both annotators had the same set of entities for resolution.
<<</Splitting phrases into shorter collocations.>>>
<<<Entity Resolution (ER) Annotation.>>>
In this stage, the annotators resolved each entity from the previous step to encyclopedic and lexicographic knowledge bases. While, in principle, multiple knowledge sources can be leveraged, this study only examines scientific entities in the context of their Wiki-linkability.
Wikipedia, as the largest online encyclopedia (with nearly 5.9 million English articles) offers a wide coverage of real-world entities, and based on its vast community of editors with editing patterns at the rate of 1.8 edits per second, is considered a reliable source of information. It is pervasively adopted in automatic EL tasks BIBREF11, BIBREF12, BIBREF13 to disambiguate the names of people, places, organizations, etc., to their real-world identities. We shift from this focus on proper names as the traditional Wikification EL purpose has been, to its, thus far, seemingly less tapped-in conceptual encyclopedic knowledge of nominal scientific entities.
Wiktionary is the largest freely available dictionary resource. Owing to its vast community of curators, it rivals the traditional expert-curated lexicographic resource WordNet BIBREF14 in terms of coverage and updates, where the latter evolves more slowly. For English, Wiktionary has nine times as many entries and at least five times as many senses compared to WordNet. As a more pertinent neologism in the context of our STEM data, consider the sense of the term “dropout” as a method for regularizing neural network algorithms, which is already present in Wiktionary. While WSD has traditionally used WordNet for its high-quality semantic network and longer prevalence in the linguistics community (c.f. Navigli (2009) for a comprehensive survey), we adopt Wiktionary, thus maintaining our focus on collaboratively curated resources.
In WSD, entities from all parts-of-speech are enriched w.r.t. language and wordsmithing. But it excludes in-depth factual and encyclopedic information, which otherwise is contained in Wikipedia. Thus, Wikipedia and Wiktionary are viewed as largely complementary.
<<</Entity Resolution (ER) Annotation.>>>
<<<ER Annotation Task formalism.>>>
Given a scholarly abstract $A$ comprising a set of entities $E = \lbrace e_{1}, ... ,e_{N}\rbrace $, the annotation goal is to produce a mapping from $E$ to a set of Wikipedia pages ($p_1,...,p_N$) and Wiktionary senses ($s_1,...,s_N$) as $R = \lbrace (p_1,s_1), ... , (p_N,s_N)\rbrace $. For entities without a mapping, the corresponding $p$ or $s$ refers to Nil.
The annotators followed comprehensive guidelines for ER including exceptions. E.g., the conjunctive phrase “acid/alkaline phosphatase activity” was semantically treated as the following two phrases “acid phosphatase activity” or “alkaline phosphatase activity” for EL, however, in the text it was retained as “acid” and “alkaline phosphatase activity.” Since WSD is performed over exact word-forms without assuming any semantic extension, it was not performed for “acid.” Annotations were also made for complex forms of reference such as meronymy (e.g., space instrument “CAPS” to spacecraft “wiki:Cassini Huygens” of which it is a part), or hypernymy (e.g., “parents” in “genepool parents” to “wiki:Ancestor”). As a result of the annotation task, the annotators obtained 82.87% rate of agreement in the EL task and a $\kappa $ score of 0.86 in the WSD task. Contrary to WSD expectations as a challenging linguistics task BIBREF15, we show high agreement; this we attribute to the entities' direct scientific sense and availability in Wiktionary (e.g., “dropout”).
Subsequently, the ER annotation for the remaining 60 abstracts (six per domain) were performed by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus.
<<</ER Annotation Task formalism.>>>
<<</Annotation Procedure for Entity Resolution>>>
<<</Our Annotation Process>>>
<<<Performance Benchmark>>>
In the second stage of the study, we perform word sense disambiguation and link our entities to authoritative sources.
<<</Performance Benchmark>>>
<<</Scientific Entity Annotations>>>
<<<Scientific Entity Resolution>>>
Aside from the four scientific concepts facilitating a common understanding of scientific entities in a multidisciplinary setting, the fact that they are just four made the human annotation task feasible. Utilizing additional concepts would have resulted in a prohibitively expensive human annotation task. Nevertheless, there are existing datasets (particularly in the biomedical domain, e.g., GENIA BIBREF6) that have adopted the conceptual framework in rich domain-specific semantic ontologies. Our work, while related, is different since we target the annotation of multidisciplinary scientific entities that facilitates a low annotation entrance barrier to producing such data. This is beneficial since it enables the task to be performed in a domain-independent manner by researchers, but perhaps not crowdworkers, unless screening tests for a certain level of scientific expertise are created.
Nonetheless, we recognize that the four categories might be too limiting for real-world usage. Further, the scientific entities from stage 1 remain susceptible to subjective interpretation without additional information. Therefore, in a similar vein to adopting domain-specific ontologies, we now perform entity linking (EL) to Wikipedia and word sense disambiguation (WSD) to Wiktionary.
<<<Evaluation>>>
We do not observe a significant impact of the long-tailed list phenomenon of unresolved entities in our data (c.f. Table TABREF36, only 17% did not have EL annotations). Results on more recent publications should perhaps prove more conclusive in this respect for newly introduced concepts, since the abstracts in our dataset were published between 2012 and 2014.
<<</Evaluation>>>
<<</Scientific Entity Resolution>>>
<<<Conclusion>>>
The STEM-ECR v1.0 corpus of scientific abstracts offers multidisciplinary Process, Method, Material, and Data entities that are disambiguated using Wiki-based encyclopedic and lexicographic sources, thus facilitating links between scientific publications and real-world knowledge (see the concept enrichment we obtain from Wikipedia for our entities in Figure ). We have found that these Wikipedia categories do enable a semantic enrichment of our entities over our generic four concept formalism as Process, Material, Method, and Data (as an illustration, the top 30 Wiki categories for each of our four generic concept types are shown in the Appendix). Further, among the various domains in our multidisciplinary STEM corpus, the inclusion of understudied domains like Mathematics, Astronomy, Earth Science, and Material Science notably makes our corpus particularly unique w.r.t. the investigation of their scientific entities. This is a step toward exploring domain independence in scientific IE. Our corpus can be leveraged for machine learning experiments in several settings: as a vital active-learning test-bed for curating more varied entity representations BIBREF16; to explore domain-independence versus domain-dependence aspects in scientific IE; for EL and WSD extensions to other ontologies or lexicographic sources; and as a knowledge resource to train a reading machine (such as PIKES BIBREF17 or FRED BIBREF18) that generates more knowledge from massive streams of interdisciplinary scientific articles. We plan to extend this corpus with relations to enable building knowledge representation models such as knowledge graphs in a domain-independent manner.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Scientific Entity Resolution"
],
"type": "disordered_section"
}
|
2003.01006
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources
<<<Abstract>>>
We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable.
<<</Abstract>>>
<<<Scientific Entity Annotations>>>
By starting with a STEM corpus of scholarly abstracts for annotating with scientific entities, we differ from existing work addressing this task since we are going beyond the domain restriction that so far seems to encompass scientific IE. For entity annotations, we rely on existing scientific concept formalisms BIBREF0, BIBREF1, BIBREF2 that appear to propose generic scientific concept types that can bridge the domains we consider, thereby offering a uniform entity selection framework. In the following subsections, we describe our annotation task in detail, after which we conclude with benchmark results.
<<<Our Annotation Process>>>
The corpus for computing inter-annotator agreement was annotated by two postdoctoral researchers in Computer Science. To develop annotation guidelines, a small pilot annotation exercise was performed on 10 abstracts (one per domain) with a set of surmised generically applicable scientific concepts such as Task, Process, Material, Object, Method, Data, Model, Results, etc., taken from existing work. Over the course of three annotation trials, these concepts were iteratively pruned where concepts that did not cover all domains were dropped, resulting in four finalized concepts, viz. Process, Method, Material, and Data as our resultant set of generic scientific concepts (see Table TABREF3 for their definitions). The subsequent annotation task entailed linguistic considerations for the precise selection of entities as one of the four scientific concepts based on their part-of-speech tag or phrase type. Process entities were verbs (e.g., “prune” in Agr), verb phrases (e.g., “integrating results” in Mat), or noun phrases (e.g. “this transport process” in Bio); Method entities comprised noun phrases containing phrase endings such as simulation, method, algorithm, scheme, technique, system, etc.; Material were nouns or noun phrases (e.g., “forest trees” in Agr, “electrons” in Ast or Che, “tephra” in ES); and majority of the Data entities were numbers otherwise noun phrases (e.g., “(2.5$\pm $1.5)kms$^{-1}$” representing a velocity value in Ast, “plant available P status” in Agr). Summarily, the resulting annotation guidelines hinged upon the following five considerations:
To ensure consistent scientific entity spans, entities were annotated as definite noun phrases where possible. In later stages, the extraneous determiners and articles could be dropped as deemed appropriate.
Coreferring lexical units for scientific entities in the context of a single abstract were annotated with the same concept type.
Quantifiable lexical units such as numbers (e.g., years 1999, measurements 4km) or even as phrases (e.g., vascular risk) were annotated as Data.
Where possible, the most precise text reference (i.e., phrases with qualifiers) regarding materials used in the experiment were annotated. For instance, “carbon atoms in graphene” was annotated as a single Material entity and not separately as “carbon atoms,” “graphene.”
Any confusion in classifying scientific entities as one of four types was resolved using the following concept precedence: Method $>$ Process $>$ Data $>$ Material, where the concept appearing earlier in the list was preferred.
After finalizing the concepts and updating the guidelines, the final annotation task proceeded in two phases.
In phase I, five abstracts per domain (i.e. 50 abstracts) were annotated by both annotators and the inter-annotator agreement was computed using Cohen's $\kappa $ BIBREF4. Results showed a moderate inter-annotator agreement at 0.52 $\kappa $.
Next, in phase II, one of the annotators interviewed subject specialists in each of the ten domains about the choice of concepts and her annotation decisions on their respective domain corpus. The feedback from the interviews was systematically categorized into error types, and these errors were discussed by both annotators. Following these discussions, the 50 abstracts from phase I were independently reannotated. The annotators obtained a substantial overall agreement of 0.76 $\kappa $ after phase II.
In Table TABREF16, we report the IAA scores obtained per domain and overall. The scores show that the annotators had a substantial agreement in seven domains, while only a moderate agreement was reached in three domains, viz. Agr, Mat, and Ast.
<<<Annotation Error Analysis>>>
We discuss some of the changes the interviewer annotator made in phase II after consultation with the subject experts.
In total, 21% of the phase I annotations were changed: Process accounted for a major proportion (nearly 54%) of the changes. Considerable inconsistency was found in annotating verbs like “increasing”, “decreasing”, “enhancing”, etc., as Process or not. Interviews with subject experts confirmed that such verbs were a relevant detail of the research investigation and hence should be annotated, so 61% of the Process changes came from additionally annotating these verbs. Material was the second most frequently changed concept in phase II, accounting for 23% of the overall changes. Nearly 32% of the changes under Material came from consistently reannotating phrases about models, tools, and systems. Another 22% of the Material changes arose where spatial locations were an essential part of the investigation, such as in the Ast and ES domains; these were decided to be included in the phase II set as Material. Finally, some changes emerged from a lack of domain expertise, mainly in the medical domain (4.3% of the overall changes), in resolving confusion between the Process and Method concept types. Most of the remaining changes were based on the treatment of conjunctive spans or lists.
Subsequently, the remaining 60 abstracts (six per domain) were annotated by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus.
<<</Annotation Error Analysis>>>
<<<Annotated Corpus Characteristics>>>
Table TABREF17 shows our annotated corpus characteristics. Our corpus comprises a total of 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities. The number of entities per abstract directly correlates with the length of the abstracts (Pearson's R 0.97). Among the concepts, Process and Material directly correlate with abstract length (R 0.8 and 0.83, respectively), while Data has only a slight correlation (R 0.35) and Method has no correlation (R 0.02).
In Figure FIGREF18, we show an example instance of a manually created text graph from the scientific entities in one abstract. The graph highlights that linguistic relations such as synonymy, hypernymy, meronymy, as well as OpenIE relations are poignant even between scientific entities.
<<</Annotated Corpus Characteristics>>>
<<<Annotation Task Tools>>>
During the annotation procedure, each annotator was shown the entities, grouped by domain and file name, in Google Sheets columns, alongside a view of the current abstract being annotated in the BRAT interface (Stenetorp et al., 2012) for context information about the entities. For entity resolution, i.e. linking and disambiguation, the annotators had local installations of specific time-stamped Wikipedia and Wiktionary dumps to enable future persistent references to the links, since the Wiki sources are actively revised. They queried the local dumps using the DKPro JWPL tool BIBREF8 for Wikipedia and the DKPro JWKTL tool BIBREF9 for Wiktionary, where both tools enable optimized search through the large Wiki data volume.
<<</Annotation Task Tools>>>
<<<Annotation Procedure for Entity Resolution>>>
Through iterative pilot annotation trials on the same pilot dataset as before, the annotators delineated an ordered annotation procedure depicted in the flowchart in Figure FIGREF28. There are two main annotation phases, viz. a preprocessing phase (determining linkability, determining whether an entity is decomposable into shorter collocations), and the entity resolution phase.
The actual annotation task then proceeded, in which to compute agreement scores, the annotators worked on the same set of 50 scholarly abstracts that they had used earlier to compute the scores for the scientific entity annotations.
<<<Linkability.>>>
In this first step, entities that conveyed a sense of scientific jargon were deemed linkable.
A natural question that arises, in the context of the Linkability criteria, is: Which stage 1 annotated scientific entities were now deemed unlinkable? They were 1) Data entities that are numbers; 2) entities that are coreference mentions which, as isolated units, lost their precise sense (e.g., “development”); and 3) Process verbs (e.g., “decreasing”, “reconstruct”, etc.). Still, having identified these cases, a caveat remained: except for entities of type Data, the remaining decisions made in this step involved a certain degree of subjectivity because, for instance, not all Process verbs were unlinkable (e.g., “flooding”). Nonetheless, at the end of this step, the annotators obtained a high IAA score at 0.89 $\kappa $. From the agreement scores, we found that the Linkability decisions could be made reliably and consistently on the data.
<<</Linkability.>>>
<<<Splitting phrases into shorter collocations.>>>
While preference was given to annotating non-compositional noun phrases as scientific entities in stage 1, consecutive occurrences of entities of the same concept type separated only by prepositions or conjunctions were merged into longer spans. As examples, consider the phrases “geysers on south polar region,” and “plume of water ice molecules and dust” in Figure FIGREF18. These phrases, respectively, can be meaningfully split as “geysers” and “south polar region” for the first example, and “plume”, “water ice molecules”, and “dust” for the second. As demonstrated in these examples, the stage 1 entities we split in this step are syntactically-flexible multi-word expressions which did not have a strict constraint on composition BIBREF10. For such expressions, we query Wikipedia or Google to identify their splits judging from the number of results returned and whether, in the results, the phrases appeared in authoritative sources (e.g., as overview topics in publishing platforms such as ScienceDirect). Since search engines operate on a vast amount of data, they are a reliable source for determining phrases with a strong statistical regularity, i.e. determining collocations.
With a focus on obtaining agreement scores for entity resolution, the annotators bypassed this stage for computing independent agreement and attempted it mutually as follows. One annotator determined all splits, wherever required, first. The second annotator acted as judge by going through all the splits and proposing new splits in case of disagreement. The disagreements were discussed by both annotators, and the previous steps were repeated iteratively until the dataset was uniformly split. After this stage, both annotators had the same set of entities for resolution.
<<</Splitting phrases into shorter collocations.>>>
<<<Entity Resolution (ER) Annotation.>>>
In this stage, the annotators resolved each entity from the previous step to encyclopedic and lexicographic knowledge bases. While, in principle, multiple knowledge sources can be leveraged, this study only examines scientific entities in the context of their Wiki-linkability.
Wikipedia, as the largest online encyclopedia (with nearly 5.9 million English articles) offers a wide coverage of real-world entities, and based on its vast community of editors with editing patterns at the rate of 1.8 edits per second, is considered a reliable source of information. It is pervasively adopted in automatic EL tasks BIBREF11, BIBREF12, BIBREF13 to disambiguate the names of people, places, organizations, etc., to their real-world identities. We shift from this focus on proper names as the traditional Wikification EL purpose has been, to its, thus far, seemingly less tapped-in conceptual encyclopedic knowledge of nominal scientific entities.
Wiktionary is the largest freely available dictionary resource. Owing to its vast community of curators, it rivals the traditional expert-curated lexicographic resource WordNet BIBREF14 in terms of coverage and updates, where the latter evolves more slowly. For English, Wiktionary has nine times as many entries and at least five times as many senses compared to WordNet. As a more pertinent neologism in the context of our STEM data, consider the sense of the term “dropout” as a method for regularizing neural network algorithms, which is already present in Wiktionary. While WSD has traditionally used WordNet for its high-quality semantic network and longer prevalence in the linguistics community (c.f. Navigli (2009) for a comprehensive survey), we adopt Wiktionary, thus maintaining our focus on collaboratively curated resources.
In WSD, entities from all parts-of-speech are enriched w.r.t. language and wordsmithing. But it excludes in-depth factual and encyclopedic information, which otherwise is contained in Wikipedia. Thus, Wikipedia and Wiktionary are viewed as largely complementary.
<<</Entity Resolution (ER) Annotation.>>>
<<<ER Annotation Task formalism.>>>
Given a scholarly abstract $A$ comprising a set of entities $E = \lbrace e_{1}, ... ,e_{N}\rbrace $, the annotation goal is to produce a mapping from $E$ to a set of Wikipedia pages ($p_1,...,p_N$) and Wiktionary senses ($s_1,...,s_N$) as $R = \lbrace (p_1,s_1), ... , (p_N,s_N)\rbrace $. For entities without a mapping, the corresponding $p$ or $s$ refers to Nil.
The annotators followed comprehensive guidelines for ER including exceptions. E.g., the conjunctive phrase “acid/alkaline phosphatase activity” was semantically treated as the following two phrases “acid phosphatase activity” or “alkaline phosphatase activity” for EL, however, in the text it was retained as “acid” and “alkaline phosphatase activity.” Since WSD is performed over exact word-forms without assuming any semantic extension, it was not performed for “acid.” Annotations were also made for complex forms of reference such as meronymy (e.g., space instrument “CAPS” to spacecraft “wiki:Cassini Huygens” of which it is a part), or hypernymy (e.g., “parents” in “genepool parents” to “wiki:Ancestor”). As a result of the annotation task, the annotators obtained 82.87% rate of agreement in the EL task and a $\kappa $ score of 0.86 in the WSD task. Contrary to WSD expectations as a challenging linguistics task BIBREF15, we show high agreement; this we attribute to the entities' direct scientific sense and availability in Wiktionary (e.g., “dropout”).
Subsequently, the ER annotation for the remaining 60 abstracts (six per domain) were performed by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus.
<<</ER Annotation Task formalism.>>>
<<</Annotation Procedure for Entity Resolution>>>
<<</Our Annotation Process>>>
<<<Performance Benchmark>>>
In the second stage of the study, we perform word sense disambiguation and link our entities to authoritative sources.
<<</Performance Benchmark>>>
<<</Scientific Entity Annotations>>>
<<<Scientific Entity Resolution>>>
Aside from the four scientific concepts facilitating a common understanding of scientific entities in a multidisciplinary setting, the fact that they are just four made the human annotation task feasible. Utilizing additional concepts would have resulted in a prohibitively expensive human annotation task. Nevertheless, there are existing datasets (particularly in the biomedical domain, e.g., GENIA BIBREF6) that have adopted the conceptual framework in rich domain-specific semantic ontologies. Our work, while related, is different since we target the annotation of multidisciplinary scientific entities that facilitates a low annotation entrance barrier to producing such data. This is beneficial since it enables the task to be performed in a domain-independent manner by researchers, but perhaps not crowdworkers, unless screening tests for a certain level of scientific expertise are created.
Nonetheless, we recognize that the four categories might be too limiting for real-world usage. Further, the scientific entities from stage 1 remain susceptible to subjective interpretation without additional information. Therefore, in a similar vein to adopting domain-specific ontologies, we now perform entity linking (EL) to Wikipedia and word sense disambiguation (WSD) to Wiktionary.
<<<Evaluation>>>
We do not observe a significant impact of the long-tailed list phenomenon of unresolved entities in our data (c.f. Table TABREF36, only 17% did not have EL annotations). Results on more recent publications should perhaps prove more conclusive in this respect for newly introduced concepts, since the abstracts in our dataset were published between 2012 and 2014.
<<</Evaluation>>>
<<</Scientific Entity Resolution>>>
<<<Conclusion>>>
The STEM-ECR v1.0 corpus of scientific abstracts offers multidisciplinary Process, Method, Material, and Data entities that are disambiguated using Wiki-based encyclopedic and lexicographic sources, thus facilitating links between scientific publications and real-world knowledge (see the concept enrichment we obtain from Wikipedia for our entities in Figure ). We have found that these Wikipedia categories do enable a semantic enrichment of our entities over our generic four concept formalism as Process, Material, Method, and Data (as an illustration, the top 30 Wiki categories for each of our four generic concept types are shown in the Appendix). Further, among the various domains in our multidisciplinary STEM corpus, the inclusion of understudied domains like Mathematics, Astronomy, Earth Science, and Material Science notably makes our corpus particularly unique w.r.t. the investigation of their scientific entities. This is a step toward exploring domain independence in scientific IE. Our corpus can be leveraged for machine learning experiments in several settings: as a vital active-learning test-bed for curating more varied entity representations BIBREF16; to explore domain-independence versus domain-dependence aspects in scientific IE; for EL and WSD extensions to other ontologies or lexicographic sources; and as a knowledge resource to train a reading machine (such as PIKES BIBREF17 or FRED BIBREF18) that generates more knowledge from massive streams of interdisciplinary scientific articles. We plan to extend this corpus with relations to enable building knowledge representation models such as knowledge graphs in a domain-independent manner.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Scientific Entity Resolution"
],
"type": "disordered_section"
}
|
1912.06927
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement
<<<Abstract>>>
In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.
<<</Abstract>>>
<<<Introduction>>>
Over the last couple of years, the MeToo movement has facilitated several discussions about sexual abuse. Social media, especially Twitter, was one of the leading platforms where people shared their experiences of sexual harassment, expressed their opinions, and also offered support to victims. A large portion of these tweets was tagged with a dedicated hashtag #MeToo, and it was one of the main trending topics in many countries. The movement was viral on social media and the hashtag was used over 19 million times in a year.
The MeToo movement has been described as an essential development against the culture of sexual misconduct by many feminists, activists, and politicians. It is one of the primary examples of successful digital activism facilitated by social media platforms. The movement generated many conversations on stigmatized issues like sexual abuse and violence, which were not often discussed before because of the associated fear of shame or retaliation. This creates an opportunity for researchers to study how people express their opinion on a sensitive topic in an informal setting like social media. However, this is only possible if there are annotated datasets that explore different linguistic facets of such social media narratives.
Twitter served as a platform for many different types of narratives during the MeToo movement BIBREF0. It was used for sharing personal stories of abuse, offering support and resources to victims, and expressing support or opposition towards the movement BIBREF1. It was also used to allege individuals of sexual misconduct, refute such claims, and sometimes voice hateful or sarcastic comments about the campaign or individuals. In some cases, people also misused the hashtag to share irrelevant or uninformative content. To capture all these complex narratives, we decided to curate a dataset of tweets related to the MeToo movement that is annotated for various linguistic aspects.
In this paper, we present a new dataset (MeTooMA) that contains 9,973 tweets associated with the MeToo movement annotated for relevance, stance, hate speech, sarcasm, and dialogue acts. We introduce and annotate three new dialogue acts that are specific to the movement: Allegation, Refutation, and Justification. The dataset also contains geographical information about the tweets: the country from which each tweet was posted.
We expect this dataset would be of great interest and use to both computational and socio-linguists. For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media across multiple countries.
<<</Introduction>>>
<<<Related Datasets>>>
Table TABREF3 presents a summary of datasets that contain social media posts about sexual abuse and annotated for various labels.
BIBREF2 created a dataset of 2,500 tweets for identification of malicious intent surrounding the cases of sexual assault. The tweets were annotated for labels like accusational, validation, sensational.
Khatua et al BIBREF3 collected 0.7 million tweets containing hashtags such as #MeToo, #AlyssaMilano, #harassed. They annotated a subset of 1024 tweets for the following assault-related labels: assault at the workplace by colleagues, assault at the educational institute by teachers or classmates, assault at public places by strangers, assault at home by a family member, multiple instances of assaults, or a generic tweet about sexual violence.
BIBREF4 created the Reddit Domestic Abuse Dataset, which contained 18,336 posts annotated for 2 classes, abuse and non-abuse.
BIBREF5 presented a dataset consisting of 5119 tweets distributed into recollection and non-recollection classes. The tweet was annotated as recollection if it explicitly mentioned a personal instance of sexual harassment.
Sharifirad et al BIBREF6 created a dataset with 3240 tweets labeled into three categories of sexism: Indirect sexism, casual sexism, physical sexism.
SVAC (Sexual Violence in Armed Conflict) is another related dataset which contains reports annotated for six different aspects of sexual violence: prevalence, perpetrators, victims, forms, location, and timing.
Unlike all the datasets described above, which are annotated for a single group of labels, our dataset is annotated for five different linguistic aspects. It also has more annotated samples than most of its contemporaries.
<<</Related Datasets>>>
<<<Dataset>>>
<<<Data Collection>>>
We focused our data collection over the period of October to December 2018 because October marked the one year anniversary of the MeToo movement. Our first step was to identify a list of countries where the movement was trending during the data collection period. To this end, we used Google's interactive tool named MeTooRisingWithGoogle, which visualizes search trends of the term "MeToo" across the globe. This helped us narrow down our query space to 16 countries.
We then scraped 500 random posts from online sexual harassment support forums to help identify keywords or phrases related to the movement . The posts were first manually inspected by the annotators to determine if they were related to the MeToo movement. Namely, if they contained self-disclosures of sexual violence, relevant information about the events associated with the movement, references to news articles or advertisements calling for support for the movement. We then processed the relevant posts to extract a set of uni-grams and bi-grams with high tf-idf scores. The annotators further pruned this set by removing irrelevant terms resulting in a lexicon of 75 keywords. Some examples include: #Sexual Harassment, #TimesUp, #EveryDaySexism, assaulted, #WhenIwas, inappropriate, workplace harassment, groped, #NotOkay, believe survivors, #WhyIDidntReport.
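As a rough illustration of this keyword-extraction step, the following Python sketch ranks uni- and bi-grams by tf-idf using scikit-learn; the function name, the stop-word list, and the cut-off of 100 candidates are illustrative assumptions rather than details reported in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

def candidate_keywords(forum_posts, top_k=100):
    # Rank uni-/bi-grams by the highest tf-idf score they reach in any post.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    tfidf = vectorizer.fit_transform(forum_posts)            # posts x n-grams
    scores = np.asarray(tfidf.max(axis=0).todense()).ravel()
    terms = np.array(vectorizer.get_feature_names_out())
    top = np.argsort(scores)[::-1][:top_k]
    return list(terms[top])   # candidates, manually pruned to the final lexicon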
We then used Twitter's public streaming API to query for tweets from the selected countries, over the chosen three-month time frame, containing any of the keywords. This resulted in a preliminary corpus of 39,406 tweets. We further filtered this data down to include only English tweets based on the tweet's language metadata field and also excluded short tweets (less than two tokens). Lastly, we de-duplicated the dataset based on the textual content. Namely, we removed all tweets that had more than 0.8 cosine similarity score on the unaltered text in tf-idf space with any other tweet. We employed this de-duplication to promote more lexical diversity in the dataset. After this filtering, we ended up with a corpus of 9,973 tweets.
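The de-duplication step can be sketched as follows; the 0.8 threshold comes from the description above, while the greedy keep-first strategy (and computing the full similarity matrix in one pass, which would need chunking at this corpus size) are simplifications for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def deduplicate(tweets, threshold=0.8):
    # Drop a tweet if its unaltered text is > threshold cosine-similar
    # (in tf-idf space) to a tweet that has already been kept.
    sims = cosine_similarity(TfidfVectorizer().fit_transform(tweets))
    kept = []
    for i in range(len(tweets)):
        if all(sims[i, j] <= threshold for j in kept):
            kept.append(i)
    return [tweets[i] for i in kept]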
Table TABREF14 presents the distribution of the tweets by country before and after the filtering process. A large portion of the samples is from India because the MeToo movement peaked towards the end of 2018 in India. There are very few samples from Russia likely because of content moderation and regulations on social media usage in the country. Figure FIGREF15 gives a geographical distribution of the curated dataset.
Due to the sensitive nature of this data, we have decided to remove any personal identifiers (such as names, locations, and hyperlinks) from the examples presented in this paper. We also want to caution the readers that some of the examples in the rest of the paper, though censored for profanity, contain offensive language and express a harsh sentiment.
<<</Data Collection>>>
<<<Annotation Task>>>
We chose against crowd-sourcing the annotation process because of the sensitive nature of the data and also to ensure a high quality of annotations. We employed three domain experts who had advanced degrees in clinical psychology and gender studies. The annotators were first provided with the guidelines document, which included instructions about each task, definitions of class labels, and examples. They studied this document and worked on a few examples to familiarize themselves with the annotation task. They also provided feedback on the document, which helped us refine the instructions and class definitions. The annotation process was broken down into five sub-tasks: for a given tweet, the annotators were instructed to identify relevance, stance, hate speech, sarcasm, and dialogue act. An important consideration was that the sub-tasks were not mutually exclusive, implying that the presence of one label did not necessarily imply the absence of any other.
<<<Task 1: Relevance>>>
Here the annotators had to determine if the given tweet was relevant to the MeToo movement. Relevant tweets typically include personal opinions (either positive or negative), experiences of abuse, support for victims, or links to MeToo related news articles. Following are examples of a relevant tweet:
Officer [name] could be kicked out of the force after admitting he groped a woman at [place] festival last year. His lawyer argued saying the constable shouldn't be punished because of the #MeToo movement. #notokay #sexualabuse.
and an irrelevant tweet:
Had a bit of break. Went to the beautiful Port [place] and nearby areas. Absolutely stunning as usual. #beautiful #MeToo #Australia #auspol [URL].
We expect this relevance annotation could serve as a useful filter for downstream modeling.
<<</Task 1: Relevance>>>
<<<Task 2: Stance>>>
Stance detection is the task of determining if the author of a text is in favour of or in opposition to a particular target of interest BIBREF7, BIBREF8. Stance helps understand public opinion about a topic and also has downstream applications in information extraction, text summarization, and textual entailment BIBREF9. We categorized stance into three classes: Support, Opposition, Neither. Support typically included tweets that expressed appreciation of the MeToo movement, shared resources for victims of sexual abuse, or offered empathy towards victims. Following is an example of a tweet with a Support stance:
Opinion: #MeToo gives a voice to victims while bringing attention to a nationwide stigma surrounding sexual misconduct at a local level.[URL]. This should go on.
On the other hand, Opposition included tweets expressing dissent over the movement or demonstrating indifference towards the victims of sexual abuse or sexual violence. An example of an Opposition tweet is shown below:
The double standards and selective outrage make it clear that feminist concerns about power imbalances in the workplace aren't principles but are tools to use against powerful men they hate and wish to destroy. #fakefeminism. #men.
<<</Task 2: Stance>>>
<<<Task 3: Hate Speech>>>
Detection of hate speech in social media has been gaining interest from NLP researchers lately BIBREF10, BIBREF11. Our annotation scheme for hate speech is based on the work of BIBREF12. For a given tweet, the annotators first had to determine if it contained any hate speech. If the tweet was hateful, they had to identify if the hate was Directed or Generalized. Directed hate is targeted at a particular individual or entity, whereas Generalized hate is targeted at larger groups that belonged to a particular ethnicity, gender, or sexual orientation. Following are examples of tweets with Directed hate:
[username] were lit minus getting f*c*i*g mouthraped by some drunk chick #MeToo (no body cares because I'm a male) [URL]
and Generalized hate:
For the men who r asking "y not then, y now?", u guys will still doubt her & harrass her even more for y she shared her story immediately no matter what! When your sister will tell her childhood story to u one day, i challenge u guys to ask "y not then, y now?" #Metoo [username] [URL] #a**holes.
<<</Task 3: Hate Speech>>>
<<<Task 4: Sarcasm>>>
Sarcasm detection has also become a topic of interest for computational linguistics over the last few years BIBREF13, BIBREF14 with applications in areas like sentiment analysis and affective computing. Sarcasm was an integral part of the MeToo movement. For example, many women used the hashtag #NoWomanEver to sarcastically describe some of their experiences with harassment. We instructed the annotators to identify the presence of any sarcasm in a tweet either about the movement or about an individual or entity. Following is an example of a sarcastic tweet:
# was pound before it was a hashtag. If you replace hashtag with the pound in the #metoo, you get pound me too. Does that apply to [name].
<<</Task 4: Sarcasm>>>
<<<Task 5: Dialogue Acts>>>
A dialogue act is defined as the function of a speaker's utterance during a conversation BIBREF15, for example, question, answer, request, suggestion, etc. Dialogue Acts have been extensively studied in spoken BIBREF16 and written BIBREF17 conversations and have lately been gaining interest in social media BIBREF18. In this task, we introduced three new dialogue acts that are specific to the MeToo movement: Allegation, Refutation, and Justification.
Allegation: This category includes tweets that allege an individual or a group of sexual misconduct. The tweet could either be a personal opinion or text summarizing allegations made against someone BIBREF19. The annotators were instructed to identify whether the tweet makes an allegation based on a first-hand account or on a verifiable source confirming the allegation. Following is an example of a tweet that qualifies as an Allegation:
More women accuse [name] of grave sexual misconduct...twitter seethes with anger. #MeToo #pervert.
Refutation: This category contains tweets where an individual or an organization is denying allegations with or without evidence. Following is an example of a Refutation tweet:
She is trying to use the #MeToo movement to settle old scores, says [name1] after [name2] levels sexual assault allegations against him.
Justification: The class includes tweets where the author is justifying their actions. These could be alleged actions in the real world (e.g. allegation of sexual misconduct) or some action performed on twitter (e.g. supporting someone who was alleged of misconduct). Following is an example of a tweet that would be tagged as Justification:
I actually did try to report it, but he and of his friends got together and lied to the police about it. #WhyIDidNotReport.
<<</Task 5: Dialogue Acts>>>
<<</Annotation Task>>>
<<</Dataset>>>
<<<Dataset Analysis>>>
This section includes descriptive and quantitative analysis performed on the dataset.
<<<Inter-annotator agreement>>>
We evaluated inter-annotator agreements using Krippendorff's alpha (K-alpha) BIBREF20. K-alpha, unlike simple agreement measures, accounts for chance correction and class distributions and can be generalized to multiple annotators. Table TABREF27 summarizes the K-alpha measures for all the annotation tasks. We observe very strong agreements for most of the tasks with a maximum of 0.92 for the relevance task. The least agreement observed was for the hate speech task at 0.78. Per recommendations in BIBREF21, we conclude that these annotations are of good quality. We chose a straightforward majority-decision approach for label adjudication: a label was accepted if two or more annotators agreed on assigning it. In cases of discrepancy, the labels were adjudicated manually by the authors. Table TABREF28 shows a distribution of class labels after adjudication.
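The majority-decision adjudication can be written down in a few lines; the sketch below assumes three annotators per tweet, and the sentinel value returned for disagreements is an illustrative placeholder (such cases were resolved manually by the authors). Krippendorff's alpha itself can be computed with, for example, the third-party krippendorff Python package.
from collections import Counter

def adjudicate(labels):
    # labels: the three annotators' labels for one tweet and one sub-task.
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes >= 2 else "MANUAL_REVIEW"

# adjudicate(["Support", "Support", "Neither"]) -> "Support"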
<<</Inter-annotator agreement>>>
<<<Geographical Distribution>>>
Figure FIGREF24 presents a distribution of all the tweets by their country of origin. As expected, a large portion of the tweets across all classes are from India, which is consistent with Table TABREF14. Interestingly, the US contributes comparatively a smaller proportion of tweets to Justification category, and likewise, UK contributes a lower portion of tweets to the Generalized Hate category. Further analysis is necessary to establish if these observations are statistically significant.
<<</Geographical Distribution>>>
<<<Label Correlations>>>
We conducted a simple experiment to understand the linguistic similarities (or lack thereof) for different pairs of class labels both within and across tasks. To this end, for each pair of labels, we converted the data into its tf-idf representation and then estimated Pearson, Spearman, and Kendall Tau correlation coefficients and also the corresponding $p$ values. The results are summarized in Table TABREF32. Overall, the correlation values seem to be on a lower end with maximum Pearson's correlation value obtained for the label pair Justification - Support, maximum Kendall Tau's correlation for Allegation - Support, and maximum Spearman's correlation for Directed Hate - Generalized Hate. The correlations are statistically significant ($p$ $<$ 0.05) for three pairs of class labels: Directed Hate - Generalized Hate, Directed Hate - Opposition, Sarcasm - Opposition. Sarcasm and Allegation also have statistically significant $p$ values for Pearson and Spearman correlations.
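One plausible reading of this analysis (the exact aggregation is not spelled out above, so the mean-vector step is an assumption) is sketched below: each label is represented by the mean tf-idf vector of its tweets, and label pairs are compared with the three correlation statistics from scipy.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau
from sklearn.feature_extraction.text import TfidfVectorizer

def label_correlations(tweets_a, tweets_b):
    # tweets_a / tweets_b: texts carrying the two class labels being compared.
    tfidf = TfidfVectorizer().fit_transform(tweets_a + tweets_b).toarray()
    vec_a = tfidf[:len(tweets_a)].mean(axis=0)
    vec_b = tfidf[len(tweets_a):].mean(axis=0)
    return {"pearson": pearsonr(vec_a, vec_b),      # each entry: (coefficient, p-value)
            "spearman": spearmanr(vec_a, vec_b),
            "kendall": kendalltau(vec_a, vec_b)}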
<<</Label Correlations>>>
<<<Keywords>>>
We used SAGE BIBREF22, a topic modelling method, to identify keywords associated with the various class labels in our dataset. SAGE is an unsupervised generative model that can identify words that distinguish one part of the corpus from the rest. For our keyword analysis, we removed all the hashtags and only considered tokens that appeared at least five times in the corpus, thus ensuring they were representative of the topic. Table TABREF25 presents the top five keywords associated with each class and also their salience scores. Though Directed and Generalized hate are closely related topics, there is not much overlap between the top 5 salient keywords, suggesting that there are linguistic cues to distinguish between them. The word predators is strongly indicative of Generalized Hate, which is intuitive because it is a term often used to describe people who were accused of sexual misconduct. The word lol being associated with Sarcasm is also reasonably intuitive because of sarcasm's close relation with humour.
<<</Keywords>>>
<<<Sentiment Analysis>>>
Figure FIGREF29 presents a word cloud representation of the data where the colours are assigned based on NRC emotion lexicon BIBREF23: green for positive and red for negative. We also analyzed all the classes in terms of Valence, Arousal, and Dominance using the NRC VAD lexicon BIBREF24. The results are summarized in Figure FIGREF33. Of all the classes, Directed-Hate has the largest valence spread, which is likely because of the extreme nature of the opinions expressed in such tweets. The spread for the dominance is fairly narrow for all class labels with the median score slightly above 0.5, suggesting a slightly dominant nature exhibited by the authors of the tweets.
<<</Sentiment Analysis>>>
<<</Dataset Analysis>>>
<<<Discussion>>>
This paper introduces a new dataset containing tweets related to the #MeToo movement. It may involve opinions over socially stigmatized issues or self-reports of distressing incidents. Therefore, it is necessary to examine the social impact of this exercise, the ethics of the individuals concerned with the dataset, and its limitations.
Mental health implications: This dataset open sources posts curated by individuals who may have undergone instances of sexual exploitation in the past. While we respect and applaud their decision to raise their voices against their exploitation, we also understand that their revelations may have been met with public backlash and apathy in both the virtual as well as the real world. In such situations, where the social reputation of both accuser and accused may be under threat, mental health concerns become very important. As survivors recount their horrific episodes of sexual harassment, it becomes imperative to provide them with therapeutic care BIBREF25 as a safeguard against mental health hazards. Such measures, if combined with the integration of mental health assessment tools in social media platforms, can make victims of sexual abuse feel more empowered and self-contemplative towards their revelations.
Use of MeTooMA dataset for population studies: We would like to mention that there have been no attempts to conduct population-centric analysis on the proposed dataset. The analysis presented in this dataset should be seen as a proof of concept to examine the instances of #MeToo movement on Twitter. The authors acknowledge that learning from this dataset cannot be used as-is for any direct social interventions. Network sampling of real-world users for any experimental work beyond this dataset would require careful evaluation beyond the observational analysis presented herein. Moreover, the findings could be used to assist already existing human knowledge. Experiences of the affected communities should be recorded and analyzed carefully, which could otherwise lead to social stigmatization, discrimination and societal bias. Enough care has been ensured so that this work does not come across as trying to target any specific individual for their personal stance on the issues pertaining to the social theme at hand. The authors do not aim to vilify individuals accused in the #MeToo cases in any manner. Our work tries to bring out general trends that may help researchers develop better techniques to understand mass unorganized virtual movements.
Effect on marginalized communities: The authors recognize the impact of the #MeToo movement on socially stigmatized populations like LGBTQIA+. The #MeToo movement provided such individuals with the liberty to express their notions about instances of sexual violence and harassment. The movement acted as a catalyst towards implementing social policy changes to benefit the members of these communities. Hence, it is essential to keep in mind that any experimental work undertaken on this dataset should try to minimize the biases against the minority groups which might get amplified in cases of sudden outburst of public reactions over sensitive media discussions.
Limitations of individual consent: Considering the mental health aspects of the individuals concerned, social media practitioners should be wary of making automated interventions to aid the victims of sexual abuse as some individuals might not prefer to disclose their sexual identities or notions. Concerned social media users might also withdraw their social media information if they find out that their personal information may potentially be utilised for computational analysis. Hence, it is imperative to seek subtle individual consent before trying to profile authors involved in online discussions to uphold personal privacy.
<<</Discussion>>>
<<<Use Cases>>>
The authors would like to formally propose some ideas on possible extensions of the proposed dataset:
The rise of online hate speech and its related behaviours like cyber-bullying has been a hot topic of research in gender studies BIBREF26. Our dataset could be utilized for extracting actionable insights and virtual dynamics to identify gender roles for analyzing sexual abuse revelations similar to BIBREF27.
The dataset could be utilized by psycholinguistics for extracting contextualized lexicons to examine how influential people are portrayed on public platforms in events of mass social media movements BIBREF28. Interestingly, such analysis may help linguists determine the power dynamics of authoritative people in terms of perspective and sentiment through campaign modelling.
Marginalized voices affected by mass social movements can be studied through polarization analysis on graph-based simulations of the social media networks. Based on the data gathered from these nodes, community interactions could be leveraged to identify indigenous issues pertaining to societal unrest across various sections of society BIBREF29.
Challenge Proposal: The authors of the paper would like to extend the present work as a challenge proposal for building computational semantic analysis systems aimed at online social movements. In contrast to already available datasets and existing challenges, we propose tasks on detecting hate speech, sarcasm, stance and relevancy that will be more focused on social media activities surrounding revelations of sexual abuse and harassment. The tasks may utilize the message-level text, linked images, tweet-level metadata and user-level interactions to model systems that are Fair, Accountable, Interpretable and Responsible (FAIR).
Research ideas emerging from this work should not be limited to the above discussion. If needed, supplementary data required to enrich this dataset can be collected utilizing Twitter API and JSON records for exploratory tasks beyond the scope of the paper.
<<</Use Cases>>>
<<<Conclusion>>>
In this paper, we presented a new dataset annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. To our knowledge, there are no datasets out there that provide annotations across so many different dimensions. This allows researchers to perform various multi-label and multi-aspect classification experiments. Additionally, researchers could also address some interesting questions on how different linguistic components influence each other: e.g. does understanding one's stance help in better prediction of hate speech?
In addition to these exciting computational challenges, we expect this data could be useful for socio and psycholinguists in understanding the language used by victims when disclosing their experiences of abuse. Likewise, they could analyze the language used by alleged individuals in justifying their actions. It also provides a chance to examine the language used to express hate in the context of sexual abuse.
In the future, we would like to propose challenge tasks around this data where the participants will have to build computational models to capture all the different linguistic aspects that were annotated. We expect such a task would drive researchers to ask more interesting questions, find limitations of the dataset, propose improvements, and provide interesting insights.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Introduction, Conclusion"
],
"type": "disordered_section"
}
|
1909.01247
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Introducing RONEC -- the Romanian Named Entity Corpus
<<<Abstract>>>
We present RONEC - the Named Entity Corpus for the Romanian language. The corpus contains over 26000 entities in ~5000 annotated sentences, belonging to 16 distinct classes. The sentences have been extracted from a copy-right free newspaper, covering several styles. This corpus represents the first initiative in the Romanian language space specifically targeted for named entity recognition. It is available in BRAT and CoNLL-U Plus formats, and it is free to use and extend at github.com/dumitrescustefan/ronec .
<<</Abstract>>>
<<<Introduction>>>
Language resources are an essential component in entire R&D domains. From the humble but vast repositories of monolingual texts that are used by the newest language modeling approaches like BERT and GPT, to parallel corpora that allow our machine translation systems to inch closer to human performance, to the more specialized resources like WordNets that encode semantic relations between nodes, these resources are necessary for the general advancement of Natural Language Processing, which eventually evolves into real apps and services we are (already) taking for granted.
We introduce RONEC - the ROmanian Named Entity Corpus, a free, open-source resource that contains annotated named entities in copy-right free text.
A named entity corpus is generally used for Named Entity Recognition (NER): the identification of entities in text such as names of persons, locations, companies, dates, quantities, monetary values, etc. This information would be very useful for any number of applications: from a general information extraction system down to task-specific apps such as identifying monetary values in invoices or product and company references in customer reviews.
We motivate the need for this corpus primarily because, for Romanian, there is no other such corpus. This basic necessity has sharply arisen as we, while working on a different project, have found out there are no usable resources to help us in an Information Extraction task: we were unable to extract people, locations or dates/values. This constituted a major road-block, with the only solution being to create such a corpus ourselves. As the corpus was out-of-scope for this project, the work was done privately, outside the umbrella of any authors' affiliations - this is why we are able to distribute this corpus completely free.
The current landscape in Romania regarding language resources is relatively unchanged from the outline given by the META-NET project over six years ago. The in-depth analysis performed in this European-wide Horizon2020-funded project revealed that the Romanian language falls in the "fragmentary support" category, just above the last, "weak/none" category (see the language/support matrix in BIBREF3). This is why, in 2019/2020, we are able to present the first NER resource for Romanian.
<<<Related corpora>>>
We note that, while fragmentary, there are a few related language resources available, but none that specifically target named entities:
<<<ROCO corpus>>>
ROCO BIBREF4 is a Romanian journalistic corpus that contains approx. 7.1M tokens. It is rich in proper names, numerals and named entities. The corpus has been automatically annotated at word-level with morphosyntactic information (MSD annotations).
<<</ROCO corpus>>>
<<<ROMBAC corpus>>>
Released in 2016, ROMBAC BIBREF5 is a Romanian text corpus containing 41M words divided in relatively equal domains like journalism, legalese, fiction, medicine, etc. Similarly to ROCO, it is automatically annotated at word level with MSD descriptors.
<<</ROMBAC corpus>>>
<<<CoRoLa corpus>>>
The much larger and recently released CoRoLa corpus BIBREF6 contains over 1B words, similarly automatically annotated.
In all these corpora the named entities are not a separate category - the texts are morphologically and syntactically annotated and all proper nouns are marked as such - NP - without any other annotation or assigned category. Thus, these corpora cannot be used in a true NER sense. Furthermore, annotations were done automatically with a tokenizer/tagger/parser, and thus are of slightly lower quality than one would expect of a gold-standard corpus.
<<</CoRoLa corpus>>>
<<</Related corpora>>>
<<</Introduction>>>
<<<Corpus Description>>>
The corpus, at its current version 1.0 is composed of 5127 sentences, annotated with 16 classes, for a total of 26377 annotated entities. The 16 classes are: PERSON, NAT_REL_POL, ORG, GPE, LOC, FACILITY, PRODUCT, EVENT, LANGUAGE, WORK_OF_ART, DATETIME, PERIOD, MONEY, QUANTITY, NUMERIC_VALUE and ORDINAL.
It is based on copyright-free text extracted from Southeast European Times (SETimes). The news portal has published “news and views from Southeast Europe” in ten languages, including Romanian. SETimes has been used in the past for several annotated corpora, including parallel corpora for machine translation. For RONEC we have used a hand-picked selection of sentences belonging to several categories (see table TABREF16 for stylistic examples).
The corpus contains the standard diacritics in Romanian: letters ș and ț are written with a comma, not with a cedilla (like ş and ţ). In Romanian many older texts are written with cedillas instead of commas because full Unicode support in Windows came much later than the classic extended Ascii which only contained the cedilla letters.
The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8. Each class will be presented in detail, with examples, in section SECREF3. A summary of available classes with word counts for each is available in table TABREF18.
The corpus is available in two formats: BRAT and CoNLL-U Plus.
<<<BRAT format>>>
As the corpus was developed in the BRAT environment, it was natural to keep this format as-is. BRAT is an online environment for collaborative text annotation - a web-based tool where several people can mark words, sub-word pieces, multiple word expressions, can link them together by relations, etc. The back-end format is very simple: given a text file that contains raw sentences, in another text file every annotated entity is specified by the start/end character offset as well as the entity type, one per line. RONEC is exported in the BRAT format as ready-to-use in the BRAT annotator itself. The corpus is pre-split into sub-folders, and contains all the extra files such as the entity list, etc, needed to directly start an eventual edit/extension of the corpus.
Example (raw/untokenized) sentences:
Tot în cadrul etapei a 2-a, a avut loc întâlnirea Vardar Skopje - S.C. Pick Szeged, care s-a încheiat la egalitate, 24 - 24.
I s-a decernat Premiul Nobel pentru literatură pe anul 1959.
Example annotation format:
T1 ORDINAL 21 26 a 2-a
T2 ORGANIZATION 50 63 Vardar Skopje
T3 ORGANIZATION 66 82 S.C. Pick Szeged
T4 NUMERIC_VALUE 116 118 24
T5 NUMERIC_VALUE 121 123 24
T6 DATETIME 175 184 anul 1959
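For readers who want to consume the standoff files directly, a minimal Python sketch for the entity lines is given below. In the .ann files on disk the three fields are tab-separated (the example above shows them with spaces), spans are assumed to be contiguous, and non-entity lines (relations, notes) are skipped; these simplifications are assumptions of the sketch, not guarantees about the released files.
def read_brat_entities(ann_path):
    entities = []
    with open(ann_path, encoding="utf-8") as f:
        for line in f:
            if not line.startswith("T"):            # keep only entity lines
                continue
            ent_id, type_and_span, text = line.rstrip("\n").split("\t")
            ent_type, start, end = type_and_span.split(" ")
            entities.append((ent_id, ent_type, int(start), int(end), text))
    return entities

# e.g. ("T2", "ORGANIZATION", 50, 63, "Vardar Skopje")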
<<</BRAT format>>>
<<<CoNLL-U Plus format>>>
The CoNLL-U Plus format extends the standard CoNLL-U which is used to annotate sentences, and in which many corpora are found today. The CoNLL-U format annotates one word per line with 10 distinct "columns" (tab separated):
ID: word index;
FORM: unmodified word from the sentence;
LEMMA: the word's lemma;
UPOS: Universal part-of-speech tag;
XPOS: Language-specific part-of-speech tag;
FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension;
HEAD: Head of the current word, which is either a value of ID or zero;
DEPREL: Universal dependency relation to the HEAD or a defined language-specific subtype of one;
DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs;
MISC: Miscellaneous annotations such as space after word.
CoNLL-U Plus extends this format by allowing a variable number of columns, with the restriction that the columns are to be defined in the header. For RONEC, we define our CoNLL-U Plus format as the standard 10 columns plus an extra column named RONEC:CLASS. This column has the following format:
each named entity has a distinct id in the sentence, starting from 1; as an entity can span several words, all words that belong to it have the same id (no relation to word indexes)
the first word belonging to an entity also contains its class (e.g. word "John" in entity "John Smith" will be marked as "1:PERSON")
a non-entity word is marked with an asterisk *
Table TABREF37 shows the CoNLL-U Plus format where for example "a 2-a" is an ORDINAL entity spanning 3 words. The first word "a" is marked in this last column as "1:ORDINAL" while the following words just with the id "1".
The CoNLL-U Plus format we provide was created as follows: (1) annotate the raw sentences using the NLP-Cube tool for Romanian (it provides everything from tokenization to parsing, filling in all attributes in columns #1-#10); (2) align each token with the human-made entity annotations from the BRAT environment (the alignment is done automatically and is error-free) and fill in column #11.
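A minimal reader for the extra column, following the scheme just described, could look as follows; handling of comment lines and of the other ten columns is omitted, and the function name is only illustrative.
def read_ronec_entities(token_lines):
    # token_lines: the tab-separated token lines of one sentence.
    entities = {}                                   # entity id -> (class, [tokens])
    for line in token_lines:
        cols = line.rstrip("\n").split("\t")
        form, ner = cols[1], cols[10]               # FORM and RONEC:CLASS columns
        if ner == "*":                              # non-entity token
            continue
        if ":" in ner:                              # first token of an entity: "id:CLASS"
            ent_id, ent_class = ner.split(":")
            entities[ent_id] = (ent_class, [form])
        else:                                       # continuation token: just "id"
            entities[ner][1].append(form)
    return [(cls, " ".join(toks)) for cls, toks in entities.values()]

# For the sentence in Table TABREF37 this yields, e.g., ("ORDINAL", "a 2-a").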
<<</CoNLL-U Plus format>>>
<<</Corpus Description>>>
<<<Classes and Annotation Methodology>>>
For the English language, we found two "categories" of NER annotations to be more prominent: CoNLL- and ACE-style. Because CoNLL only annotates a few classes (depending on the corpora, starting from the basic three: PERSON, ORGANIZATION and LOCATION, up to seven), we chose to follow the ACE-style with 18 different classes. After analyzing the ACE guide we have settled on 16 final classes that seemed more appropriate for Romanian, seen in table TABREF18.
In the following sub-sections we will describe each class in turn, with a few examples. Some examples have been left in Romanian while some have been translated in English for the reader's convenience. In the examples at the end of each class' description, translations in English are colored for easier reading.
<<<PERSON>>>
Persons, including fictive characters. We also mark common nouns that refer to a person (or several), including pronouns (us, them, they), but not articles (e.g. in "an individual" we don't mark "an"). Positions are not marked unless they directly refer to the person: "The presidential counselor has advised ... that a new counselor position is open.", here we mark "presidential counselor" because it refers to a person and not the "counselor" at the end of the sentence as it refers only to a position.
Locul doi i-a revenit româncei Otilia Aionesei, o elevă de 17 ani.
green!55!blueThe second place was won by Otilia Aionesei, a 17 year old student.
Ministrul bulgar pentru afaceri europene, Meglena Kuneva ...
green!55!blueThe Bulgarian Minister for European Affairs, Meglena Kuneva ...
<<</PERSON>>>
<<<NAT_REL_POL>>>
These are nationalities or religious or political groups. We include words that indicate the nationality of a person, group or product/object. Generally words marked as NAT_REL_POL are adjectives.
avionul american
green!55!bluethe American airplane
Grupul olandez
green!55!bluethe Dutch group
Grecii iși vor alege președintele.
green!55!blueThe Greeks will elect their president.
<<</NAT_REL_POL>>>
<<<ORGANIZATION>>>
Companies, agencies, institutions, sports teams, groups of people. These entities must have an organizational structure. We only mark full organizational entities, not fragments, divisions or sub-structures.
Universitatea Politehnica București a decis ...
green!55!blueThe Politehnic University of Bucharest has decided ...
Adobe Inc. a lansat un nou produs.
green!55!blueAdobe Inc. has launched a new product.
<<</ORGANIZATION>>>
<<<GPE>>>
Geo-political entities: countries, counties, cities, villages. GPE entities have all of the following components: (1) a population, (2) a well-defined governing/organizing structure and (3) a physical location. GPE entities are not sub-entities (like a neighbourhood from a city).
Armin van Buuren s-a născut în Leiden.
green!55!blueArmin van Buuren was born in Leiden.
U.S.A. ramane indiferentă amenințărilor Coreei de Nord.
green!55!blueU.S.A. remains indifferent to North Korea's threats.
<<</GPE>>>
<<<LOC>>>
Non-geo-political locations: mountains, seas, lakes, streets, neighbourhoods, addresses, continents, regions that are not GPEs. We include regions such as Middle East, "continents" like Central America or East Europe. Such regions include multiple countries, each with its own government and thus cannot be GPEs.
Pe DN7 Petroșani-Obârșia Lotrului carosabilul era umed, acoperit (cca 1 cm) cu zăpadă, iar de la Obârșia Lotrului la stațiunea Vidra, stratul de zăpadă era de 5-6 cm.
green!55!blueOn DN7 Petroșani-Obârșia Lotrului the road was wet, covered (about 1cm) with snow, and from Obârșia Lotrului to Vidra resort the snow depth was around 5-6 cm.
Produsele comercializate în Europa de Est au o calitate inferioară celor din vest.
green!55!blueProducts sold in East Europe have a lower quality than those sold in the west.
<<</LOC>>>
<<<FACILITY>>>
Buildings, airports, highways, bridges or other functional structures built by humans. Buildings or other structures which house people, such as homes, factories, stadiums, office buildings, prisons, museums, tunnels, train stations, etc., named or not. Everything that falls within the architectural and civil engineering domains should be labeled as a FACILITY. We do not mark structures composed of multiple (and distinct) sub-structures, like a named area that is composed of several buildings, or "micro"-structures such as an apartment (as it is a unit of an apartment building). However, larger, named functional structures can still be marked (such as "terminal X" of an airport).
Autostrada A2 a intrat în reparații pe o bandă, însă pe A1 nu au fost încă începute lucrările.
green!55!blueRepairs on one lane have commenced on the A2 highway, while on A1 no works have started yet.
Aeroportul Henri Coandă ar putea sa fie extins cu un nou terminal.
green!55!blueHenri Coandă Airport could be extended with a new terminal.
<<</FACILITY>>>
<<<PRODUCT>>>
Objects, cars, food, items, anything that is a product, including software (such as Photoshop, Word, etc.). We don't mark services or processes. With very few exceptions (such as software products), PRODUCT entities have to have physical form, be directly man-made. We don't mark entities such as credit cards, written proofs, etc. We don't include the producer's name unless it's embedded in the name of the product.
Mașina cumpărată este o Mazda.
green!55!blueThe bought car is a Mazda.
S-au cumpărat 5 Ford Taurus și 2 autobuze Volvo.
green!55!blue5 Ford Taurus and 2 Volvo buses have been acquired.
<<</PRODUCT>>>
<<<EVENT>>>
Named events: Storms (e.g.: "Sandy"), battles, wars, sports events, etc. We don't mark sports teams (they are ORGs) or matches (e.g. "Steaua-Rapid" will be marked as two separate ORGs even if it refers to a football match between the two teams, because the match itself is not a specific named event). Events have to be significant, with at least national impact, not local.
Războiul cel Mare, Războiul Națiunilor, denumit, în timpul celui de Al Doilea Război Mondial, Primul Război Mondial, a fost un conflict militar de dimensiuni mondiale.
green!55!blueThe Great War, War of the Nations, as it was called during the Second World War, the First World War was a global-scale military conflict.
<<</EVENT>>>
<<<LANGUAGE>>>
This class represents all languages.
Românii din România vorbesc română.
green!55!blueRomanians from Romania speak Romanian.
În Moldova se vorbește rusa și româna.
green!55!blueIn Moldavia they speak Russian and Romanian.
<<</LANGUAGE>>>
<<<WORK_OF_ART>>>
Books, songs, TV shows, pictures; everything that is a work of art/culture created by humans. We mark just their name. We don't mark laws.
Accesul la Mona Lisa a fost temporar interzis vizitatorilor.
green!55!blueAccess to Mona Lisa was temporarily forbidden to visitors.
În această seară la Vrei sa Fii Miliardar vom avea un invitat special.
green!55!blueThis evening in Who Wants To Be A Millionaire we will have a special guest.
<<</WORK_OF_ART>>>
<<<DATETIME>>>
Date and time values. We will mark full constructions, not parts, if they refer to the same moment (e.g. a comma separates two distinct DATETIME entities only if they refer to distinct moments). If we have a well specified period (e.g. "between 20-22 hours") we mark it as PERIOD, otherwise less well defined periods are marked as DATETIME (e.g.: "last summer", "September", "Wednesday", "three days"); Ages are marked as DATETIME as well. Prepositions are not included.
Te rog să vii aici în cel mult o oră, nu mâine sau poimâine.
green!55!bluePlease come here in one hour at most, not tomorrow or the next day.
Actul s-a semnat la orele 16.
green!55!blueThe paper was signed at 16 hours.
August este o lună secetoasă.
green!55!blueAugust is a dry month.
Pe data de 20 martie între orele 20-22 va fi oprită alimentarea cu curent.
green!55!blueOn the 20th of March, between 20-22 hours, electricity will be cut-off.
<<</DATETIME>>>
<<<PERIOD>>>
Periods/time intervals. Periods have to be very well marked in text. If a period is not like "a-b" then it is a DATETIME.
Spectacolul are loc între 1 și 3 Aprilie.
green!55!blueThe show takes place between 1 and 3 April.
În prima jumătate a lunii iunie va avea loc evenimentul de două zile.
green!55!blueIn the first half of June the two-day event will take place.
<<</PERIOD>>>
<<<MONEY>>>
Money, monetary values, including units (e.g. USD, $, RON, lei, francs, pounds, Euro, etc.) written with number or letters. Entities that contain any monetary reference, including measuring units, will be marked as MONEY (e.g. 10$/sqm, 50 lei per hour). Words that are not clear values will not be marked, such as "an amount of money", "he received a coin".
Primarul a semnat un contract în valoare de 10 milioane lei noi, echivalentul a aproape 2.6m EUR.
green!55!blueThe mayor signed a contract worth 10 million new lei, equivalent of almost 2.6m EUR.
<<</MONEY>>>
<<<QUANTITY>>>
Measurements, such as weight, distance, etc. Any type of quantity belongs in this class.
Conducătorul auto avea peste 1g/ml alcool în sânge, fiind oprit deoarece a fost prins cu peste 120 km/h în localitate.
green!55!blueThe car driver had over 1g/ml blood alcohol, and was stopped because he was caught speeding with over 120km/h in the city.
<<</QUANTITY>>>
<<<NUMERIC_VALUE>>>
Any numeric value (including phone numbers), written with letters or numbers or as percents, which is not MONEY, QUANTITY or ORDINAL.
Raportul XII-2 arată 4 552 de investitori, iar structura de portofoliu este: cont curent 0,05%, certificate de trezorerie 66,96%, depozite bancare 13,53%, obligațiuni municipale 19,46%.
green!55!blueThe XII-2 report shows 4 552 investors, and the portfolio structure is: current account 0,05%, treasury bonds 66,96%, bank deposits 13,53%, municipal bonds 19,46%.
<<</NUMERIC_VALUE>>>
<<<ORDINAL>>>
The first, the second, last, 30th, etc.; An ordinal must imply an order relation between elements. For example, "second grade" does not involve a direct order relation; it indicates just a succession in grades in a school system.
Primul loc a fost ocupat de echipa Germaniei.
green!55!blueThe first place was won by Germany's team.
The corpus creation process involved a small number of people who voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian, and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copy-right free text from which we hand-picked sentences to maximize the amount of entities while maintaining style balance. The annotation process was a trial-and-error process, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people, took about 6 months (as work was volunteer-based, we could not expect a 100% time commitment from the people involved), and followed the steps:
Each person would annotate the full corpus (this included the cycles of shaping up the annotation guide, and re-annotation). Inter-annotator agreement (ITA) at this point was relatively low, at 60-70%, especially for a number of classes.
We then automatically merged all annotations, with the following criterion: if 3 of the 4 annotators agreed on an entity (same class and same start-stop offsets), it was kept unchanged; otherwise the entity (longest span) was marked as CONFLICTED (a minimal sketch of this merging step is given after these steps).
Two teams were created, each with two persons. Each team annotated the full corpus again, starting from the previous step. At this point, class-average ITA had risen to over 85%.
Next, the same automatic merging happened, this time entities remained unchanged if both annotations agreed.
Finally, one of the authors went through the full corpus one more time, correcting disagreements.
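The automatic merging criterion referenced in the steps above can be sketched as follows; the per-annotator data layout is an assumption, and collapsing overlapping disagreements to their longest span is left out for brevity.
from collections import Counter

def merge_annotations(per_annotator_entities, agreement=3):
    # per_annotator_entities: one set of (class, start, end) tuples per annotator.
    votes = Counter(e for ents in per_annotator_entities for e in set(ents))
    merged = [e for e, v in votes.items() if v >= agreement]
    conflicted = [e for e, v in votes.items() if v < agreement]   # re-annotated later
    return merged, conflicted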
We would like to make a few notes regarding classes and inter-annotator agreements:
Classes like ORGANIZATION, NAT_REL_POL, LANGUAGE or GPEs have the highest ITA, over 98%. They are pretty clear and distinct from other classes.
The DATETIME class also has a high ITA, with some overlap with PERIOD: annotators could fall-back if they were not sure that an expression was a PERIOD and simply mark it as DATETIME.
WORK_OF_ART and EVENTs have caused some problems because the scope could not be properly defined from just one sentence. For example, a fair in a city could be a local event, but could also be a national periodic event.
MONEY, QUANTITY and ORDINAL all are more specific classes than NUMERIC_VALUE. So, in cases where a numeric value has a unit of measure by it, it should become a QUANTITY, not a NUMERIC_VALUE. However, this "specificity" has created some confusion between these classes, just like with DATETIME and PERIOD.
The ORDINAL class is a bit ambiguous, because, even though it ranks "higher" than NUMERIC_VALUE, it is the least diverse, most of the entities following the same patterns.
PRODUCT and FACILITY classes have the lowest ITA by far (less than 40% in the first annotation cycle, less than 70% in the second). We actually considered removing these classes from the annotation process, but to try to mimic the OntoNotes classes as much as possible we decided to keep them in. There were many cases where the annotators disagreed about the scope of words being facilities or products. Even in the ACE guidelines these two classes are not very well "documented" with examples of what is and what is not a PRODUCT or FACILITY. Considering that these classes are, in our opinion, of the lowest importance among all the classes, a lower ITA was accepted.
Finally, we would like to address the "semantic scope" of the entities - for example, for class PERSON, we do not annotate only proper nouns (NPs) but basically any reference to a person (e.g. through pronouns "she", job position titles, common nouns such as "father", etc.). We do this because we would like a high-coverage corpus, where entities are marked as more semantically-oriented rather than syntactically - in the same way ACE entities are more encompassing than CoNLL entities. We note that, for example, if one would like strict proper noun entities, it is very easy to extract from a PERSON multi-word entity only those words which are syntactically marked (by any tagger) as NPs.
<<</ORDINAL>>>
<<</Classes and Annotation Methodology>>>
<<<Conclusions>>>
We have presented RONEC - the first Named Entity Corpus for the Romanian language. At its current version, in its 5127 sentences we have 26377 annotated entities in 16 different classes. The corpus is based on copy-right free text, and is released as open-source, free to use and extend.
We hope that in time this corpus will grow in size and mature towards a strong resource for Romanian. For this to happen we have released the corpus in two formats: CoNLL-U Plus, which is a text-based tab-separated pre-tokenized and annotated format that is simple to use, and BRAT, which is practically plug-and-play into the BRAT web annotation tool where anybody can add and annotate new sentences. Also, in the GitHub repo there are automatic alignment and conversion scripts so the corpus can easily be converted between the two formats.
Finally, we have also provided an annotation guide that we will improve, and in time evolve into a full annotation document like the ACE Annotation Guidelines for Entities V6.6 BIBREF8.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Abstract, Introduction"
],
"type": "disordered_section"
}
|
1909.01247
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Introducing RONEC -- the Romanian Named Entity Corpus
<<<Abstract>>>
We present RONEC - the Named Entity Corpus for the Romanian language. The corpus contains over 26000 entities in ~5000 annotated sentences, belonging to 16 distinct classes. The sentences have been extracted from a copy-right free newspaper, covering several styles. This corpus represents the first initiative in the Romanian language space specifically targeted for named entity recognition. It is available in BRAT and CoNLL-U Plus formats, and it is free to use and extend at github.com/dumitrescustefan/ronec .
<<</Abstract>>>
<<<Introduction>>>
Language resources are an essential component in entire R&D domains. From the humble but vast repositories of monolingual texts that are used by the newest language modeling approaches like BERT and GPT, to parallel corpora that allow our machine translation systems to inch closer to human performance, to the more specialized resources like WordNets that encode semantic relations between nodes, these resources are necessary for the general advancement of Natural Language Processing, which eventually evolves into real apps and services we are (already) taking for granted.
We introduce RONEC - the ROmanian Named Entity Corpus, a free, open-source resource that contains annotated named entities in copy-right free text.
A named entity corpus is generally used for Named Entity Recognition (NER): the identification of entities in text such as names of persons, locations, companies, dates, quantities, monetary values, etc. This information would be very useful for any number of applications: from a general information extraction system down to task-specific apps such as identifying monetary values in invoices or product and company references in customer reviews.
We motivate the need for this corpus primarily because, for Romanian, there is no other such corpus. This basic necessity has sharply arisen as we, while working on a different project, have found out there are no usable resources to help us in an Information Extraction task: we were unable to extract people, locations or dates/values. This constituted a major road-block, with the only solution being to create such a corpus ourselves. As the corpus was out-of-scope for this project, the work was done privately, outside the umbrella of any authors' affiliations - this is why we are able to distribute this corpus completely free.
The current landscape in Romania regarding language resources is relatively unchanged from the outline given by the META-NET project over six years ago. The in-depth analysis performed in this European-wide Horizon2020-funded project revealed that the Romanian language falls in the "fragmentary support" category, just above the last, "weak/none" category (see the language/support matrix in BIBREF3). This is why, in 2019/2020, we are able to present the first NER resource for Romanian.
<<<Related corpora>>>
We note that, while fragmentary, there are a few related language resources available, but none that specifically target named entities:
<<<ROCO corpus>>>
ROCO BIBREF4 is a Romanian journalistic corpus that contains approx. 7.1M tokens. It is rich in proper names, numerals and named entities. The corpus has been automatically annotated at word-level with morphosyntactic information (MSD annotations).
<<</ROCO corpus>>>
<<<ROMBAC corpus>>>
Released in 2016, ROMBAC BIBREF5 is a Romanian text corpus containing 41M words divided in relatively equal domains like journalism, legalese, fiction, medicine, etc. Similarly to ROCO, it is automatically annotated at word level with MSD descriptors.
<<</ROMBAC corpus>>>
<<<CoRoLa corpus>>>
The much larger and recently released CoRoLa corpus BIBREF6 contains over 1B words, similarly automatically annotated.
In all these corpora the named entities are not a separate category - the texts are morphologically and syntactically annotated and all proper nouns are marked as such - NP - without any other annotation or assigned category. Thus, these corpora cannot be used in a true NER sense. Furthermore, annotations were done automatically with a tokenizer/tagger/parser, and thus are of slightly lower quality than one would expect of a gold-standard corpus.
<<</CoRoLa corpus>>>
<<</Related corpora>>>
<<</Introduction>>>
<<<Corpus Description>>>
The corpus, at its current version 1.0 is composed of 5127 sentences, annotated with 16 classes, for a total of 26377 annotated entities. The 16 classes are: PERSON, NAT_REL_POL, ORG, GPE, LOC, FACILITY, PRODUCT, EVENT, LANGUAGE, WORK_OF_ART, DATETIME, PERIOD, MONEY, QUANTITY, NUMERIC_VALUE and ORDINAL.
It is based on copyright-free text extracted from Southeast European Times (SETimes). The news portal has published “news and views from Southeast Europe” in ten languages, including Romanian. SETimes has been used in the past for several annotated corpora, including parallel corpora for machine translation. For RONEC we have used a hand-picked selection of sentences belonging to several categories (see table TABREF16 for stylistic examples).
The corpus contains the standard diacritics in Romanian: the letters ș and ț are written with a comma, not with a cedilla (like ş and ţ). Many older Romanian texts are written with cedillas instead of commas because full Unicode support in Windows came much later than the classic extended ASCII code pages, which only contained the cedilla variants of these letters.
The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities Version 6.6 2008.06.13 BIBREF8. Each class is presented in detail, with examples, in section SECREF3. A summary of the available classes, with word counts for each, is given in table TABREF18.
The corpus is available in two formats: BRAT and CoNLL-U Plus.
<<<BRAT format>>>
As the corpus was developed in the BRAT environment, it was natural to keep this format as-is. BRAT is an online environment for collaborative text annotation - a web-based tool where several people can mark words, sub-word pieces and multi-word expressions, link them together with relations, etc. The back-end format is very simple: given a text file that contains the raw sentences, every annotated entity is specified in a second text file by its start/end character offsets and its entity type, one entity per line. RONEC is exported in the BRAT format ready-to-use in the BRAT annotator itself. The corpus is pre-split into sub-folders and contains all the extra files, such as the entity list, needed to directly start editing or extending the corpus.
Example (raw/untokenized) sentences:
Tot în cadrul etapei a 2-a, a avut loc întâlnirea Vardar Skopje - S.C. Pick Szeged, care s-a încheiat la egalitate, 24 - 24.
I s-a decernat Premiul Nobel pentru literatură pe anul 1959.
Example annotation format:
T1 ORDINAL 21 26 a 2-a
T2 ORGANIZATION 50 63 Vardar Skopje
T3 ORGANIZATION 66 82 S.C. Pick Szeged
T4 NUMERIC_VALUE 116 118 24
T5 NUMERIC_VALUE 121 123 24
T6 DATETIME 175 184 anul 1959
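To make the standoff layout above concrete, here is a minimal Python sketch (written purely for illustration, not part of the released corpus tooling) that parses such entity lines into structured records. It assumes the whitespace-separated layout shown in the example; real BRAT .ann files separate the id, the "CLASS start end" triple and the surface text with tabs, so the same split-based logic applies.

from typing import List, NamedTuple

class Entity(NamedTuple):
    ent_id: str
    label: str
    start: int
    end: int
    text: str

def parse_entity_line(line: str) -> Entity:
    # "T1 ORDINAL 21 26 a 2-a" -> ("T1", "ORDINAL", 21, 26, "a 2-a")
    ent_id, rest = line.split(maxsplit=1)
    label, start, end, text = rest.split(maxsplit=3)
    return Entity(ent_id, label, int(start), int(end), text)

def parse_annotations(lines: List[str]) -> List[Entity]:
    # entity annotations start with "T"; other BRAT line types are skipped
    return [parse_entity_line(l) for l in lines if l.strip().startswith("T")]

for e in parse_annotations(["T1 ORDINAL 21 26 a 2-a",
                            "T6 DATETIME 175 184 anul 1959"]):
    print(e.label, (e.start, e.end), e.text)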
<<</BRAT format>>>
<<<CoNLL-U Plus format>>>
The CoNLL-U Plus format extends the standard CoNLL-U which is used to annotate sentences, and in which many corpora are found today. The CoNLL-U format annotates one word per line with 10 distinct "columns" (tab separated):
ID: word index;
FORM: unmodified word from the sentence;
LEMMA: the word's lemma;
UPOS: Universal part-of-speech tag;
XPOS: Language-specific part-of-speech tag;
FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension;
HEAD: Head of the current word, which is either a value of ID or zero;
DEPREL: Universal dependency relation to the HEAD or a defined language-specific subtype of one;
DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs;
MISC: Miscellaneous annotations such as space after word.
CoNLL-U Plus extends this format by allowing a variable number of columns, with the restriction that the columns have to be defined in the header. For RONEC, we define our CoNLL-U Plus format as the standard 10 columns plus one extra column named RONEC:CLASS. This column has the following format:
each named entity has a distinct id in the sentence, starting from 1; as an entity can span several words, all words that belong to it have the same id (no relation to word indexes)
the first word belonging to an entity also contains its class (e.g. word "John" in entity "John Smith" will be marked as "1:PERSON")
a non-entity word is marked with an asterisk *
Table TABREF37 shows the CoNLL-U Plus format, where for example "a 2-a" is an ORDINAL entity spanning 3 words. The first word "a" is marked in this last column as "1:ORDINAL", while the following words are marked just with the id "1".
The CoNLL-U Plus format we provide was created as follows: (1) annotate the raw sentences using the NLP-Cube tool for Romanian (it provides everything from tokenization to parsing, filling in all attributes in columns #1-#10); (2) align each token with the human-made entity annotations from the BRAT environment (the alignment is done automatically and is error-free) and fill in column #11.
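As an illustration of how this extra column can be consumed, the following Python sketch (for illustration only, not shipped with the corpus) groups the tokens of one sentence back into labelled entity spans; it assumes tab-separated CoNLL-U Plus lines with RONEC:CLASS as the 11th column, exactly as described above.

def read_entities(sentence_lines):
    # Collect (class, surface text) spans from one sentence's lines.
    entities = {}                         # entity id -> {"label", "tokens"}
    for line in sentence_lines:
        if not line.strip() or line.startswith("#"):
            continue                      # skip blank and comment lines
        cols = line.rstrip("\n").split("\t")
        form, ronec = cols[1], cols[10]   # FORM and RONEC:CLASS columns
        if ronec == "*":
            continue                      # non-entity token
        if ":" in ronec:                  # first token of an entity, e.g. "1:ORDINAL"
            ent_id, label = ronec.split(":", 1)
            entities[ent_id] = {"label": label, "tokens": [form]}
        else:                             # continuation token, e.g. "1"
            entities[ronec]["tokens"].append(form)
    return [(e["label"], " ".join(e["tokens"])) for e in entities.values()]

# toy two-token sentence; only the FORM and RONEC:CLASS columns matter here
sample = [
    "1\ta\t_\t_\t_\t_\t_\t_\t_\t_\t1:ORDINAL",
    "2\t2-a\t_\t_\t_\t_\t_\t_\t_\t_\t1",
]
print(read_entities(sample))              # [('ORDINAL', 'a 2-a')]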
<<</CoNLL-U Plus format>>>
<<</Corpus Description>>>
<<<Classes and Annotation Methodology>>>
For the English language, we found two "categories" of NER annotations to be the most prominent: CoNLL-style and ACE-style. Because CoNLL only annotates a few classes (depending on the corpus, from the basic three - PERSON, ORGANIZATION and LOCATION - up to seven), we chose to follow the ACE-style, with 18 different classes. After analyzing the ACE guide we settled on 16 final classes that seemed more appropriate for Romanian, shown in table TABREF18.
In the following sub-sections we describe each class in turn, with a few examples. Some examples have been left in Romanian while others have been translated into English for the reader's convenience. In the examples at the end of each class description, the English translation is given directly below the Romanian sentence.
<<<PERSON>>>
Persons, including fictive characters. We also mark common nouns that refer to a person (or several), including pronouns (us, them, they), but not articles (e.g. in "an individual" we don't mark "an"). Positions are not marked unless they directly refer to the person: in "The presidential counselor has advised ... that a new counselor position is open." we mark "presidential counselor" because it refers to a person, but not the "counselor" at the end of the sentence, as it refers only to a position.
Locul doi i-a revenit româncei Otilia Aionesei, o elevă de 17 ani.
The second place was won by Otilia Aionesei, a 17 year old student.
Ministrul bulgar pentru afaceri europene, Meglena Kuneva ...
The Bulgarian Minister for European Affairs, Meglena Kuneva ...
<<</PERSON>>>
<<<NAT_REL_POL>>>
These are nationalities or religious or political groups. We include words that indicate the nationality of a person, group or product/object. Generally, words marked as NAT_REL_POL are adjectives.
avionul american
the American airplane
Grupul olandez
the Dutch group
Grecii iși vor alege președintele.
The Greeks will elect their president.
<<</NAT_REL_POL>>>
<<<ORGANIZATION>>>
Companies, agencies, institutions, sports teams, groups of people. These entities must have an organizational structure. We only mark full organizational entities, not fragments, divisions or sub-structures.
Universitatea Politehnica București a decis ...
The Politehnic University of Bucharest has decided ...
Adobe Inc. a lansat un nou produs.
Adobe Inc. has launched a new product.
<<</ORGANIZATION>>>
<<<GPE>>>
Geo-political entities: countries, counties, cities, villages. GPE entities have all of the following components: (1) a population, (2) a well-defined governing/organizing structure and (3) a physical location. GPE entities are not sub-entities (like a neighbourhood from a city).
Armin van Buuren s-a născut în Leiden.
Armin van Buuren was born in Leiden.
U.S.A. ramane indiferentă amenințărilor Coreei de Nord.
U.S.A. remains indifferent to North Korea's threats.
<<</GPE>>>
<<<LOC>>>
Non-geo-political locations: mountains, seas, lakes, streets, neighbourhoods, addresses, continents, regions that are not GPEs. We include regions such as the Middle East and "continents" like Central America or East Europe. Such regions span multiple countries, each with its own government, and thus cannot be GPEs.
Pe DN7 Petroșani-Obârșia Lotrului carosabilul era umed, acoperit (cca 1 cm) cu zăpadă, iar de la Obârșia Lotrului la stațiunea Vidra, stratul de zăpadă era de 5-6 cm.
On DN7 Petroșani-Obârșia Lotrului the road was wet, covered (about 1cm) with snow, and from Obârșia Lotrului to Vidra resort the snow depth was around 5-6 cm.
Produsele comercializate în Europa de Est au o calitate inferioară celor din vest.
Products sold in East Europe have a lower quality than those sold in the west.
<<</LOC>>>
<<<FACILITY>>>
Buildings, airports, highways, bridges or other functional structures built by humans. Buildings or other structures which house people, such as homes, factories, stadiums, office buildings, prisons, museums, tunnels, train stations, etc., named or not. Everything that falls within the architectural and civil engineering domains should be labeled as a FACILITY. We do not mark structures composed of multiple (and distinct) sub-structures, like a named area that is composed of several buildings, or "micro"-structures such as an apartment (as it is a unit of an apartment building). However, larger, named functional structures can still be marked (such as "terminal X" of an airport).
Autostrada A2 a intrat în reparații pe o bandă, însă pe A1 nu au fost încă începute lucrările.
Repairs on one lane have commenced on the A2 highway, while on A1 no works have started yet.
Aeroportul Henri Coandă ar putea sa fie extins cu un nou terminal.
Henri Coandă Airport could be extended with a new terminal.
<<</FACILITY>>>
<<<PRODUCT>>>
Objects, cars, food, items, anything that is a product, including software (such as Photoshop, Word, etc.). We don't mark services or processes. With very few exceptions (such as software products), PRODUCT entities have to have physical form and be directly man-made. We don't mark entities such as credit cards, written proofs, etc. We don't include the producer's name unless it's embedded in the name of the product.
Mașina cumpărată este o Mazda.
The bought car is a Mazda.
S-au cumpărat 5 Ford Taurus și 2 autobuze Volvo.
5 Ford Taurus and 2 Volvo buses have been acquired.
<<</PRODUCT>>>
<<<EVENT>>>
Named events: storms (e.g. "Sandy"), battles, wars, sports events, etc. We don't mark sports teams (they are ORGs) or unnamed matches (e.g. "Steaua-Rapid" will be marked as two separate ORGs even though it refers to a football match between the two teams, because the match itself is not a specific named event). Events have to be significant, with at least national impact, not just local.
Războiul cel Mare, Războiul Națiunilor, denumit, în timpul celui de Al Doilea Război Mondial, Primul Război Mondial, a fost un conflict militar de dimensiuni mondiale.
The Great War, War of the Nations, as it was called during the Second World War, the First World War was a global-scale military conflict.
<<</EVENT>>>
<<<LANGUAGE>>>
This class represents all languages.
Românii din România vorbesc română.
Romanians from Romania speak Romanian.
În Moldova se vorbește rusa și româna.
In Moldavia they speak Russian and Romanian.
<<</LANGUAGE>>>
<<<WORK_OF_ART>>>
Books, songs, TV shows, pictures; everything that is a work of art/culture created by humans. We mark just their name. We don't mark laws.
Accesul la Mona Lisa a fost temporar interzis vizitatorilor.
Access to Mona Lisa was temporarily forbidden to visitors.
În această seară la Vrei sa Fii Miliardar vom avea un invitat special.
This evening in Who Wants To Be A Millionaire we will have a special guest.
<<</WORK_OF_ART>>>
<<<DATETIME>>>
Date and time values. We mark full constructions, not parts, if they refer to the same moment (e.g. a comma separates two distinct DATETIME entities only if they refer to distinct moments). If we have a well-specified period (e.g. "between 20-22 hours") we mark it as PERIOD; otherwise, less well-defined periods are marked as DATETIME (e.g. "last summer", "September", "Wednesday", "three days"). Ages are marked as DATETIME as well. Prepositions are not included.
Te rog să vii aici în cel mult o oră, nu mâine sau poimâine.
Please come here in one hour at most, not tomorrow or the next day.
Actul s-a semnat la orele 16.
The paper was signed at 16 hours.
August este o lună secetoasă.
August is a dry month.
Pe data de 20 martie între orele 20-22 va fi oprită alimentarea cu curent.
On the 20th of March, between 20-22 hours, electricity will be cut-off.
<<</DATETIME>>>
<<<PERIOD>>>
Periods/time intervals. Periods have to be explicitly delimited in the text: if a period is not of the form "a-b" (from a to b), it is marked as DATETIME instead.
Spectacolul are loc între 1 și 3 Aprilie.
The show takes place between 1 and 3 April.
În prima jumătate a lunii iunie va avea loc evenimentul de două zile.
In the first half of June the two-day event will take place.
<<</PERIOD>>>
<<<MONEY>>>
Money, monetary values, including units (e.g. USD, $, RON, lei, francs, pounds, Euro, etc.) written with number or letters. Entities that contain any monetary reference, including measuring units, will be marked as MONEY (e.g. 10$/sqm, 50 lei per hour). Words that are not clear values will not be marked, such as "an amount of money", "he received a coin".
Primarul a semnat un contract în valoare de 10 milioane lei noi, echivalentul a aproape 2.6m EUR.
The mayor signed a contract worth 10 million new lei, equivalent of almost 2.6m EUR.
<<</MONEY>>>
<<<QUANTITY>>>
Measurements, such as weight, distance, etc. Any type of quantity belongs in this class.
Conducătorul auto avea peste 1g/ml alcool în sânge, fiind oprit deoarece a fost prins cu peste 120 km/h în localitate.
The car driver had over 1g/ml blood alcohol, and was stopped because he was caught speeding with over 120km/h in the city.
<<</QUANTITY>>>
<<<NUMERIC_VALUE>>>
Any numeric value (including phone numbers), written with letters or numbers or as percents, which is not MONEY, QUANTITY or ORDINAL.
Raportul XII-2 arată 4 552 de investitori, iar structura de portofoliu este: cont curent 0,05%, certificate de trezorerie 66,96%, depozite bancare 13,53%, obligațiuni municipale 19,46%.
The XII-2 report shows 4 552 investors, and the portfolio structure is: current account 0,05%, treasury bonds 66,96%, bank deposits 13,53%, municipal bonds 19,46%.
<<</NUMERIC_VALUE>>>
<<<ORDINAL>>>
The first, the second, last, 30th, etc. An ordinal must imply an order relation between elements. For example, "second grade" does not involve a direct order relation; it just indicates a position in the succession of grades in a school system.
Primul loc a fost ocupat de echipa Germaniei.
The first place was won by Germany's team.
The corpus creation process involved a small number of people who voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copyright-free text source from which we hand-picked sentences to maximize the number of entities while maintaining style balance. The annotation process was trial-and-error, with cycles composed of annotating, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following the guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people and took about 6 months (as the work was volunteer-based, we could not expect 100% time commitment from the people involved), and followed these steps:
Each person annotated the full corpus (this included the cycles of shaping up the annotation guide and re-annotating). Inter-annotator agreement (ITA) at this point was relatively low, at 60-70%, and especially low for a number of classes.
We then automatically merged all annotations with the following criterion: if 3 of the 4 annotators agreed on an entity (same class and start-stop offsets), it was kept unchanged; otherwise the entity (longest span) was marked as CONFLICTED (see the sketch after this list).
Two teams were created, each with two persons. Each team annotated the full corpus again, starting from the output of the previous step. At this point, class-average ITA had risen to over 85%.
Next, the same automatic merging happened, this time entities remained unchanged if both annotations agreed.
Finally, one of the authors went through the full corpus one more time, correcting disagreements.
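The merging rule can be sketched in a few lines of Python; this is a minimal illustration of the 3-of-4 criterion above (resolving overlapping conflicts down to a single longest span, as done in the actual merge, is omitted for brevity).

from collections import Counter

def merge_annotations(per_annotator, min_agree=3):
    # per_annotator: one collection of (label, start, end) triples per
    # annotator, all referring to the same sentence; an entity counts as
    # agreed only if label, start and end match exactly.
    votes = Counter(e for ann in per_annotator for e in ann)
    merged = set()
    for (label, start, end), n in votes.items():
        if n >= min_agree:
            merged.add((label, start, end))         # kept unchanged
        else:
            merged.add(("CONFLICTED", start, end))  # flagged for manual review
    return sorted(merged, key=lambda e: e[1])

The same function with min_agree=2 and two inputs corresponds to the merge performed after the second annotation round.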
We would like to make a few notes regarding classes and inter-annotator agreements:
Classes like ORGANIZATION, NAT_REL_POL, LANGUAGE or GPEs have the highest ITA, over 98%. They are pretty clear and distinct from other classes.
The DATETIME class also has a high ITA, with some overlap with PERIOD: annotators who were not sure that an expression was a PERIOD could fall back and simply mark it as DATETIME.
WORK_OF_ART and EVENTs have caused some problems because the scope could not be properly defined from just one sentence. For example, a fair in a city could be a local event, but could also be a national periodic event.
MONEY, QUANTITY and ORDINAL all are more specific classes than NUMERIC_VALUE. So, in cases where a numeric value has a unit of measure by it, it should become a QUANTITY, not a NUMERIC_VALUE. However, this "specificity" has created some confusion between these classes, just like with DATETIME and PERIOD.
The ORDINAL class is a bit ambiguous, because, even though it ranks "higher" than NUMERIC_VALUE, it is the least diverse, most of the entities following the same patterns.
PRODUCT and FACILITY classes have the lowest ITA by far (less than 40% in the first annotation cycle, less than 70% in the second). We actually considered removing these classes from the annotation process, but to try to mimic the OntoNotes classes as much as possible we decided to keep them in. There were many cases where the annotators disagreed about the scope of words being facilities or products. Even in the ACE guidelines these two classes are not very well "documented" with examples of what is and what is not a PRODUCT or FACILITY. Considering that these classes are, in our opinion, of the lowest importance among all the classes, a lower ITA was accepted.
Finally, we would like to address the "semantic scope" of the entities - for example, for class PERSON, we do not annotate only proper nouns (NPs) but basically any reference to a person (e.g. through pronouns "she", job position titles, common nouns such as "father", etc.). We do this because we would like a high-coverage corpus, where entities are marked as more semantically-oriented rather than syntactically - in the same way ACE entities are more encompassing than CoNLL entities. We note that, for example, if one would like strict proper noun entities, it is very easy to extract from a PERSON multi-word entity only those words which are syntactically marked (by any tagger) as NPs.
<<</ORDINAL>>>
<<</Classes and Annotation Methodology>>>
<<<Conclusions>>>
We have presented RONEC - the first Named Entity Corpus for the Romanian language. In its current version, its 5127 sentences contain 26377 annotated entities in 16 different classes. The corpus is based on copyright-free text, and is released as open source, free to use and extend.
We hope that in time this corpus will grow in size and mature into a strong resource for Romanian. To this end we have released the corpus in two formats: CoNLL-U Plus, a text-based, tab-separated, pre-tokenized and annotated format that is simple to use, and BRAT, which is practically plug-and-play in the BRAT web annotation tool, where anybody can add and annotate new sentences. The GitHub repository also contains automatic alignment and conversion scripts, so the corpus can easily be converted between the two formats.
Finally, we have also provided an annotation guide that we will improve, and in time evolve into a full annotation document like the ACE Annotation Guidelines for Entities V6.6 BIBREF8.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Conclusions, Classes and Annotation Methodology"
],
"type": "disordered_section"
}
|
1912.01220
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Modelling Semantic Categories using Conceptual Neighborhood
<<<Abstract>>>
While many methods for learning vector space embeddings have been proposed in the field of Natural Language Processing, these methods typically do not distinguish between categories and individuals. Intuitively, if individuals are represented as vectors, we can think of categories as (soft) regions in the embedding space. Unfortunately, meaningful regions can be difficult to estimate, especially since we often have few examples of individuals that belong to a given category. To address this issue, we rely on the fact that different categories are often highly interdependent. In particular, categories often have conceptual neighbors, which are disjoint from but closely related to the given category (e.g.\ fruit and vegetable). Our hypothesis is that more accurate category representations can be learned by relying on the assumption that the regions representing such conceptual neighbors should be adjacent in the embedding space. We propose a simple method for identifying conceptual neighbors and then show that incorporating these conceptual neighbors indeed leads to more accurate region based representations.
<<</Abstract>>>
<<<Introduction>>>
Vector space embeddings are commonly used to represent entities in fields such as machine learning (ML) BIBREF0, natural language processing (NLP) BIBREF1, information retrieval (IR) BIBREF2 and cognitive science BIBREF3. An important point, however, is that such representations usually represent both individuals and categories as vectors BIBREF4, BIBREF5, BIBREF6. Note that in this paper, we use the term category to denote natural groupings of individuals, as it is used in cognitive science, with individuals referring to the objects from the considered domain of discourse. For example, the individuals carrot and cucumber belong to the vegetable category. We use the term entities as an umbrella term covering both individuals and categories.
Given that a category corresponds to a set of individuals (i.e. its instances), modelling them as (possibly imprecise) regions in the embedding space seems more natural than using vectors. In fact, it has been shown that the vector representations of individuals that belong to the same category are indeed often clustered together in learned vector space embeddings BIBREF7, BIBREF8. The view of categories being regions is also common in cognitive science BIBREF3. However, learning region representations of categories is a challenging problem, because we typically only have a handful of examples of individuals that belong to a given category. One common assumption is that natural categories can be modelled using convex regions BIBREF3, which simplifies the estimation problem. For instance, based on this assumption, BIBREF9 modelled categories using Gaussian distributions and showed that these distributions can be used for knowledge base completion. Unfortunately, this strategy still requires a relatively high number of training examples to be successful.
However, when learning categories, humans do not only rely on examples. For instance, there is evidence that when learning the meaning of nouns, children rely on the default assumption that these nouns denote mutually exclusive categories BIBREF10. In this paper, we will in particular take advantage of the fact that many natural categories are organized into so-called contrast sets BIBREF11. These are sets of closely related categories which exhaustively cover some sub-domain, and which are assumed to be mutually exclusive; e.g. the set of all common color names, the set $\lbrace \text{fruit},\text{vegetable}\rbrace $ or the set $\lbrace \text{NLP}, \text{IR}, \text{ML}\rbrace $. Categories from the same contrast set often compete for coverage. For instance, we can think of the NLP domain as consisting of research topics that involve processing textual information which are not covered by the IR and ML domains. Categories which compete for coverage in this way are known as conceptual neighbors BIBREF12; e.g. NLP and IR, red and orange, fruit and vegetable. Note that the exact boundary between two conceptual neighbors may be vague (e.g. tomato can be classified as fruit or as vegetable).
In this paper, we propose a method for learning region representations of categories which takes advantage of conceptual neighborhood, especially in scenarios where the number of available training examples is small. The main idea is illustrated in Figure FIGREF2, which depicts a situation where we are given some examples of a target category $C$ as well as some related categories $N_1,N_2,N_3,N_4$. If we have to estimate a region from the examples of $C$ alone, the small elliptical region shown in red would be a reasonable choice. More generally, a standard approach would be to estimate a Gaussian distribution from the given examples. However, vector space embeddings typically have hundreds of dimensions, while the number of known examples of the target category is often far lower (e.g. 2 or 3). In such settings we will almost inevitably underestimate the coverage of the category. However, in the example from Figure FIGREF2, if we take into account the knowledge that $N_1,N_2,N_3,N_4$ are conceptual neighbors of $C$, the much larger, shaded region becomes a more natural choice for representing $C$. Indeed, the fact that e.g. $C$ and $N_1$ are conceptual neighbors suggests that any point in between the examples of these categories needs to be contained either in the region representing $C$ or the region representing $N_1$. In the spirit of prototype approaches to categorization BIBREF13, without any further information it makes sense to assume that their boundary is more or less half-way in between the known examples.
The contribution of this paper is two-fold. First, we propose a method for identifying conceptual neighbors from text corpora. We essentially treat this problem as a standard text classification problem, by relying on categories with large numbers of training examples to generate a suitable distant supervision signal. Second, we show that the predicted conceptual neighbors can effectively be used to learn better category representations.
<<</Introduction>>>
<<<Related Work>>>
In distributional semantics, categories are frequently modelled as vectors. For example, BIBREF14 study the problem of deciding for a word pair $(i,c)$ whether $i$ denotes an instance of the category $c$, which they refer to as instantiation. They treat this problem as a binary classification problem, where e.g. the pair (AAAI, conference) would be a positive example, while (conference, AAAI) and (New York, conference) would be negative examples. Different from our setting, their aim is thus essentially to model the instantiation relation itself, similar in spirit to how hypernymy has been modelled in NLP BIBREF15, BIBREF16. To predict instantiation, they use a simple neural network model which takes as input the word vectors of the input pair $(i,c)$. They also experiment with an approach that instead models a given category as the average of the word vectors of its known instances and found that this led to better results.
A few authors have already considered the problem of learning region representations of categories. Most closely related, BIBREF17 model ontology concepts using Gaussian distributions. In BIBREF18, a model is presented which embeds Wikipedia entities such that entities which have the same WikiData type are characterized by some region within a low-dimensional subspace of the embedding. Within the context of knowledge graph embedding, several approaches have been proposed that essentially model semantic types as regions BIBREF19, BIBREF20. A few approaches have also been proposed for modelling word meaning using regions BIBREF21, BIBREF22 or Gaussian distributions BIBREF23. Along similar lines, several authors have proposed approaches inspired by probabilistic topic modelling, which model latent topics using Gaussians BIBREF24 or related distributions BIBREF25.
On the other hand, the notion of conceptual neighborhood has been covered in most detail in the field of spatial cognition, starting with the influential work of BIBREF12. In computational linguistics, moreover, this representation framework aligns with lexical semantics traditions where word meaning is constructed in terms of semantic decomposition, i.e. lexical items being minimally decomposed into structured forms (or templates) rather than sets of features BIBREF26, effectively mimicking a sort of conceptual neighbourhood. In Pustejovsky's generative lexicon, a set of “semantic devices” is proposed such that they behave in semantics similarly as grammars do in syntax. Specifically, this framework considers the qualia structure of a lexical unit as a set of expressive semantic distinctions, the most relevant for our purposes being the so-called formal role, which is defined as “that which distinguishes the object within a larger domain”, e.g. shape or color. This semantic interplay between cognitive science and computational linguistics gave way to the term lexical coherence, which has been used for contextualizing the meaning of words in terms of how they relate to their conceptual neighbors BIBREF27, or by providing expressive lexical semantic resources in the form of ontologies BIBREF28.
<<</Related Work>>>
<<<Model Description>>>
Our aim is to introduce a model for learning region-based category representations which can take advantage of knowledge about the conceptual neighborhood of that category. Throughout the paper, we focus in particular on modelling categories from the BabelNet taxonomy BIBREF29, although the proposed method can be applied to any resource which (i) organizes categories in a taxonomy and (ii) provides examples of individuals that belong to these categories. Selecting BabelNet as our use case is a natural choice, however, given its large scale and the fact that it integrates many lexical and ontological resources.
As the possible conceptual neighbors of a given BabelNet category $C$, we consider all its siblings in the taxonomy, i.e. all categories $C_1,...,C_k$ which share a direct parent with $C$. To select which of these siblings are most likely to be conceptual neighbors, we look at mentions of these categories in a text corpus. As an illustrative example, consider the pair (hamlet,village) and the following sentence:
In British geography, a hamlet is considered smaller than a village and ...
From this sentence, we can derive that hamlet and village are disjoint but closely related categories, thus suggesting that they are conceptual neighbors. However, training a classifier that can identify conceptual neighbors from such sentences is complicated by the fact that conceptual neighborhood is not covered in any existing lexical resource, to the best of our knowledge, which means that large sets of training examples are not readily available. To address this lack of training data, we rely on a distant supervision strategy. The central insight is that for categories with a large number of known instances, we can use the embeddings of these instances to check whether two categories are conceptual neighbors. In particular, our approach involves the following three steps:
Identify pairs of categories that are likely to be conceptual neighbors, based on the vector representations of their known instances.
Use the pairs from Step 1 to train a classifier that can recognize sentences which indicate that two categories are conceptual neighbors.
Use the classifier from Step 2 to predict which pairs of BabelNet categories are conceptual neighbors and use these predictions to learn category representations.
Note that in Step 1 we can only consider BabelNet categories with a large number of instances, while the end result in Step 3 is that we can predict conceptual neighborhood for categories with only few known instances. We now discuss the three aforementioned steps one by one.
<<<Step 1: Predicting Conceptual Neighborhood from Embeddings>>>
Our aim here is to generate distant supervision labels for pairs of categories, indicating whether they are likely to be conceptual neighbors. These labels will then be used in Section SECREF12 to train a classifier for predicting conceptual neighborhood from text.
Let $A$ and $B$ be siblings in the BabelNet taxonomy. If enough examples of individuals belonging to these categories are provided in BabelNet, we can use these instances to estimate high-quality representations of $A$ and $B$, and thus estimate whether they are likely to be conceptual neighbors. In particular, we split the known instances of $A$ into a training set $I^A_{\textit {train}}$ and test set $I^A_{\textit {test}}$, and similar for $B$. We then train two types of classifiers. The first classifier estimates a Gaussian distribution for each category, using the training instances in $I^A_{\textit {train}}$ and $I^B_{\textit {train}}$ respectively. This should provide us with a reasonable representation of $A$ and $B$ regardless of whether they are conceptual neighbors. In the second approach, we first learn a Gaussian distribution from the joint set of training examples $I^A_{\textit {train}} \cup I^B_{\textit {train}}$ and then train a logistic regression classifier to separate instances from $A$ and $B$. In particular, note that in this way, we directly impose the requirement that the regions modelling $A$ and $B$ are adjacent in the embedding space (intuitively corresponding to two halves of a Gaussian distribution). We can thus expect that the second approach should lead to better predictions than the first approach if $A$ and $B$ are conceptual neighbors and to worse predictions if they are not. In particular, we propose to use the relative performance of the two classifiers as the required distant supervision signal for predicting conceptual neighborhood.
We now describe the two classification models in more detail, after which we explain how these models are used to generate the distant supervision labels.
Gaussian Classifier. The first classifier follows the basic approach from BIBREF17, where Gaussian distributions were similarly used to model WikiData categories. In particular, we estimate the probability that an individual $e$ with vector representation $\mathbf {e}$ is an instance of the category $A$ as follows:
where $\lambda _A$ is the prior probability of belonging to category $A$, the likelihood $f(\mathbf {e} | A)$ is modelled as a Gaussian distribution and $f(\mathbf {e})$ will also be modelled as a Gaussian distribution. Intuitively, we think of the Gaussian $f(. | A)$ as defining a soft region, modelling the category $A$. Given the high-dimensional nature of typical vector space embeddings, we use a mean field approximation:
Where $d$ is the number of dimensions in the vector space embedding, $e_i$ is the $i^{\textit {th}}$ coordinate of $\mathbf {e}$, and $f_i(. | A)$ is a univariate Gaussian. To estimate the parameters $\mu _i$ and $\sigma _i^2$ of this Gaussian, we use a Bayesian approach with a flat prior:
where $G(e_i;\mu _i,\sigma _i^2)$ represents the Gaussian distribution with mean $\mu _i$ and variance $\sigma _i^2$ and NI$\chi ^{2}$ is the normal inverse-$\chi ^{2}$ distribution. In other words, instead of using a single estimate of the mean $\mu $ and variance $\sigma ^2$ we average over all plausible choices of these parameters. The use of the normal inverse-$\chi ^{2}$ distribution for the prior on $\mu _i$ and $\sigma _i^2$ is a common choice, which has the advantage that the above integral simplifies to a Student-t distribution. In particular, we have:
where we assume $I^A_{\textit {train}}= \lbrace a_1,...,a_n\rbrace $, $a_i^j$ denotes the $i^{\textit {th}}$ coordinate of the vector embedding of $a_j$, $\overline{x_i} = \frac{1}{n}\sum _{j=1}^n a_i^j$ and $t_{n-1}$ is the Student t-distribution with $n-1$ degrees of freedom. The probability $f(\mathbf {e})$ is estimated in a similar way, but using all BabelNet instances. The prior $\lambda _A$ is tuned based on a validation set. Finally, we classify $e$ as a positive example if $P(A|\mathbf {e}) > 0.5$.
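For concreteness, the classifier just described can be summarized as follows; the notation below is generic and the Student-t scale is the standard posterior predictive under a flat normal inverse-$\chi ^{2}$ prior, so it should be read as a sketch of the model rather than a verbatim copy of the original display equations:

$$P(A \mid \mathbf {e}) = \frac{\lambda _A \, f(\mathbf {e} \mid A)}{f(\mathbf {e})}, \qquad f(\mathbf {e} \mid A) \approx \prod _{i=1}^{d} f_i(e_i \mid A),$$

$$f_i(e_i \mid A) = t_{n-1}\Big (e_i \,;\, \overline{x_i},\, \big (1+\tfrac{1}{n}\big )\, s_i^2\Big ), \qquad s_i^2 = \frac{1}{n-1}\sum _{j=1}^{n}\big (a_i^j-\overline{x_i}\big )^2,$$

where $t_{n-1}(\cdot \,;\, \mu , \sigma ^2)$ denotes a Student-t density with $n-1$ degrees of freedom, location $\mu $ and squared scale $\sigma ^2$, and $e$ is classified as a positive example of $A$ whenever $P(A \mid \mathbf {e}) > 0.5$.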
GLR Classifier. We first train a Gaussian classifier as described above, but now using the training instances of both $A$ and $B$. Let us denote the probability predicted by this classifier as $P(A\cup B | \mathbf {e})$. The intuition is that entities for which this probability is high should be instances of either $A$ or $B$, provided that $A$ and $B$ are conceptual neighbors. If, on the other hand, $A$ and $B$ are not conceptual neighbors, relying on this assumption is likely to lead to errors (i.e. there may be individuals whose representation is in between $A$ and $B$ which are not instances of either), which is exactly the signal we need for generating the distant supervision labels. If $P(A\cup B | \mathbf {e}) > 0.5$, we assume that $e$ belongs either to $A$ or to $B$. To distinguish between these two cases, we train a logistic regression classifier, using the instances from $I^A_{\textit {train}}$ as positive examples and those from $I^B_{\textit {train}}$ as negative examples. Putting everything together, we thus classify $e$ as a positive example for $A$ if $P(A\cup B | \mathbf {e})>0.5$ and $e$ is classified as a positive example by the logistic regression classifier. Similarly, we classify $e$ as a positive example for $B$ if $P(A\cup B | \mathbf {e})>0.5$ and $e$ is classified as a negative example by the logistic regression classifier. We will refer to this classification model as GLR (Gaussian Logistic Regression).
<<<Generating Distant Supervision Labels>>>
To generate the distant supervision labels, we consider a ternary classification problem for each pair of siblings $A$ and $B$. In particular, the task is to decide for a given individual $e$ whether it is an instance of $A$, an instance of $B$, or an instance of neither (where only disjoint pairs $A$ and $B$ are considered). For the Gaussian classifier, we predict $A$ iff $P(A|\mathbf {e})>0.5$ and $P(A|\mathbf {e}) > P(B|\mathbf {e})$. For the GLR classifier, we predict $A$ if $P(A\cup B|\mathbf {e}) >0.5$ and the associated logistic regression classifier predicts $A$. The condition for predicting $B$ is analogous. The test examples for this ternary classification problem consist of the elements from $I^A_{\textit {test}}$ and $I^B_{\textit {test}}$, as well as some negative examples (i.e. individuals that are neither instances of $A$ nor $B$). To select these negative examples, we first sample instances from categories that have the same parent as $A$ and $B$, choosing as many such negative examples as we have positive examples. Second, we also sample the same number of negative examples from randomly selected categories in the taxonomy.
Let $F^1_{AB}$ be the F1 score achieved by the Gaussian classifier and $F^2_{AB}$ the F1 score of the GLR classifier. Our hypothesis is that $F^1_{AB} \ll F^2_{AB}$ suggests that $A$ and $B$ are conceptual neighbors, while $F^1_{AB} \gg F^2_{AB}$ suggests that they are not. This intuition is captured in the following score:
where we consider $A$ and $B$ to be conceptual neighbors if $s_{AB}\gg 0.5$.
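For intuition, one concrete score with exactly this behaviour (shown purely as an illustrative possibility, not necessarily the exact definition adopted) is $s_{AB} = \frac{F^2_{AB}}{F^1_{AB} + F^2_{AB}}$: it approaches 1 when $F^1_{AB} \ll F^2_{AB}$, approaches 0 when $F^1_{AB} \gg F^2_{AB}$, and equals $0.5$ when the two classifiers perform equally well.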
<<</Generating Distant Supervision Labels>>>
<<</Step 1: Predicting Conceptual Neighborhood from Embeddings>>>
<<<Step 2: Predicting Conceptual Neighborhood from Text>>>
We now consider the following problem: given two BabelNet categories $A$ and $B$, predict whether they are likely to be conceptual neighbors based on the sentences from a text corpus in which they are both mentioned. To train such a classifier, we use the distant supervision labels from Section SECREF8 as training data. Once this classifier has been trained, we can then use it to predict conceptual neighborhood for categories for which only few instances are known.
To find sentences in which both $A$ and $B$ are mentioned, we rely on a disambiguated text corpus in which mentions of BabelNet categories are explicitly tagged. Such a disambiguated corpus can be automatically constructed, using methods such as the one proposed by BIBREF30, for instance. For each pair of candidate categories, we thus retrieve all sentences where they co-occur. Next, we represent each extracted sentence as a vector. To this end, we considered two possible strategies:
Word embedding averaging: We compute a sentence embedding by simply averaging the word embeddings of each word within the sentence. Despite its simplicity, this approach has been shown to provide competitive results BIBREF31, in line with more expensive and sophisticated methods e.g. based on LSTMs.
Contextualized word embeddings: The recently proposed contextualized embeddings BIBREF32, BIBREF33 have already proven successful in a wide range of NLP tasks. Instead of providing a single vector representation for all words irrespective of the context, contextualized embeddings predict a representation for each word occurrence which depends on its context. These representations are usually based on pre-trained language models. In our setting, we extract the contextualized embeddings for the two candidate categories within the sentence. To obtain this contextualized embedding, we used the last layer of the pre-trained language model, which has been shown to be most suitable for capturing semantic information BIBREF34, BIBREF35. We then use the concatenation of these two contextualized embeddings as the representation of the sentence.
For both strategies, we average their corresponding sentence-level representations across all sentences in which the same two candidate categories are mentioned. Finally, we train an SVM classifier on the resulting vectors to predict for the pair of siblings $(A,B)$ whether $s_{AB}> 0.5$ holds.
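The first strategy is simple enough to sketch in a few lines of Python; the tiny vocabulary and 2-dimensional vectors below are placeholders standing in for the pre-trained GloVe embeddings, so the snippet only illustrates the averaging-plus-SVM pipeline rather than reproducing the actual experimental setup.

import numpy as np
from sklearn.svm import SVC

word_vectors = {                          # stand-in for pre-trained GloVe vectors
    "hamlet":  np.array([0.10, 0.30]),
    "village": np.array([0.20, 0.20]),
    "smaller": np.array([0.05, 0.50]),
    "than":    np.array([0.00, 0.10]),
}

def sentence_embedding(tokens):
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

# one averaged vector per category pair (in practice averaged again over all
# sentences in which the pair co-occurs), with distant labels 1(s_AB > 0.5)
X = np.stack([
    sentence_embedding("a hamlet is smaller than a village".split()),
    sentence_embedding("the village near the village church".split()),
])
y = np.array([1, 0])

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X))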
<<</Step 2: Predicting Conceptual Neighborhood from Text>>>
<<<Step 3: Category Induction>>>
Let $C$ be a category and assume that $N_1,...,N_k$ are conceptual neighbors of this category. Then we can model $C$ by generalizing the idea underpinning the GLR classifier. In particular, we first learn a Gaussian distribution from all the instances of $C$ and $N_1,...,N_k$. This Gaussian model allows us to estimate the probability $P(C\cup N_1\cup ...\cup N_k \,|\, \mathbf {e})$ that $e$ belongs to one of $C,N_1,...,N_k$. If this probability is sufficiently high (i.e. higher than 0.5), we use a multinomial logistic regression classifier to decide which of these categories $e$ is most likely to belong to. Geometrically, we can think of the Gaussian model as capturing the relevant local domain, while the multinomial logistic regression model carves up this local domain, similar as in Figure FIGREF2.
In practice, we do not know with certainty which categories are conceptual neighbors of $C$. Instead, we select the $k$ categories (for some fixed constant $k$), among all the siblings of $C$, which are most likely to be conceptual neighbors, according to the text classifier from Section SECREF12.
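A rough Python sketch of this decision rule is given below, for illustration only: a diagonal-covariance Gaussian density threshold stands in for the posterior probability $P(C\cup N_1\cup ...\cup N_k \,|\, \mathbf {e})$, and it is not the implementation used in the experiments.

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.linear_model import LogisticRegression

def fit_local_model(instances_by_category):
    # instances_by_category: dict mapping a category name (C or one of its
    # predicted conceptual neighbors) to an array of instance embeddings
    X = np.vstack(list(instances_by_category.values()))
    y = np.concatenate([[label] * len(vecs)
                        for label, vecs in instances_by_category.items()])
    # Gaussian over the union of C and its neighbors: the "local domain"
    gate = multivariate_normal(mean=X.mean(axis=0),
                               cov=np.diag(X.var(axis=0) + 1e-6))
    # multinomial logistic regression carves up that local domain
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return gate, clf

def classify(gate, clf, e, density_threshold):
    # density_threshold plays the role of the 0.5 posterior cut-off above
    if gate.pdf(e) < density_threshold:
        return None                       # e falls outside the local domain
    return clf.predict(e.reshape(1, -1))[0]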
<<</Step 3: Category Induction>>>
<<</Model Description>>>
<<<Experiments>>>
The central problem we consider is category induction: given some instances of a category, predict which other individuals are likely to be instances of that category. When enough instances are given, standard approaches such as the Gaussian classifier described earlier, or even a simple SVM classifier, can perform well on this task. For many categories, however, we only have access to a few instances, either because the considered ontology is highly incomplete or because the considered category only has a few actual instances. The main research question which we want to analyze is whether (predicted) conceptual neighborhood can help to obtain better category induction models in such cases. In Section SECREF16, we first provide more details about the experimental setting that we followed. Section SECREF23 then discusses our main quantitative results. Finally, in Section SECREF26 we present a qualitative analysis.
<<<Experimental setting>>>
<<<Taxonomy>>>
As explained in Section SECREF3, we used BabelNet BIBREF29 as our reference taxonomy. BabelNet is a large-scale full-fledged taxonomy consisting of heterogeneous sources such as WordNet BIBREF36, Wikidata BIBREF37 and WiBi BIBREF38, making it suitable to test our hypothesis in a general setting.
Vector space embeddings. Both the distant labelling method from Section SECREF8 and the category induction model itself need access to vector representations of the considered instances. To this end, we used the NASARI vectors, which have been learned from Wikipedia and are already linked to BabelNet BIBREF1.
BabelNet category selection. To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing. To tune the prior probability $\lambda _A$ for these categories, we hold out 10% from the training set as a validation set.
The conceptual neighbors among the considered test categories are predicted using the classifier from Section SECREF12. To obtain the distant supervision labels needed to train that classifier, we consider all BabelNet categories with at least 50 instances. This ensures that the distant supervision labels are sufficiently accurate and that there is no overlap with the categories which are used for evaluating the model.
Text classifier training. As the text corpus from which to extract sentences for category pairs we used the English Wikipedia. In particular, we used the dump of November 2014, for which a disambiguated version is available online. This disambiguated version was constructed using the shallow disambiguation algorithm of BIBREF30. As explained in Section SECREF12, for each pair of categories we extracted all the sentences where they co-occur, allowing a maximum window size of 10 tokens between their occurrences, and 10 tokens to the left and right of the first and second category within the sentence, respectively. For the averaging-based sentence representations we used the 300-dimensional pre-trained GloVe word embeddings BIBREF39. To obtain the contextualized representations we used the pre-trained 768-dimensional BERT-base model BIBREF33.
The text classifier is trained on 3,552 categories which co-occur at least once in the same sentence in the Wikipedia corpus, using the corresponding scores $s_{AB}$ as the supervision signal (see Section SECREF12). To inspect how well conceptual neighborhood can be predicted from text, we performed a 10-fold cross validation over the training data, removing for this experiment the unclear cases (i.e., those category pairs with $s_{AB}$ scores between $0.4$ and $0.6$). We also considered a simple baseline based on the number of co-occurring sentences for each pair, which we might expect to be a reasonably strong indicator of conceptual neighborhood, i.e. the more often two categories are mentioned in the same sentence, the more likely it is that they are conceptual neighbors. The results for this cross-validation experiment are summarized in Table TABREF22. Surprisingly, perhaps, the word vector averaging method seems more robust overall, while being considerably faster than the method using BERT. The results also confirm the intuition that the number of co-occurring sentences is positively correlated with conceptual neighborhood, although the results for this baseline are clearly weaker than those for the proposed classifiers.
Baselines. To put the performance of our model in perspective, we consider three baseline methods for category induction. First, we consider the performance of the Gaussian classifier described earlier, as a representative example of how well we can model each category when only considering its given instances; this model will be referred to as Gauss. Second, we consider a variant of the proposed model in which we assume that all siblings of the category are conceptual neighbors; this model will be referred to as Multi. Third, we consider a variant of our model in which the neighbors are selected based on similarity. To this end, we represent each BabelNet category by its vector from the NASARI space. From the set of siblings of the target category $C$, we then select the $k$ categories whose vector representation is most similar to that of $C$, in terms of cosine similarity. This baseline will be referred to as Similarity$_k$, with $k$ the number of selected neighbors.
We refer to our model as SECOND-WEA$_k$ or SECOND-BERT$_k$ (SEmantic categories with COnceptual NeighborhooD), depending on whether the word embedding averaging strategy is used or the method using BERT.
<<</Taxonomy>>>
<<</Experimental setting>>>
<<<Quantitative Results>>>
Our main results for the category induction task are summarized in Table TABREF24. In this table, we show results for different choices of the number of selected conceptual neighbors $k$, ranging from 1 to 5. As can be seen from the table, our approach substantially outperforms all baselines, with Multi being the most competitive baseline. Interestingly, for the Similarity baseline, the higher the number of neighbors, the more the performance approaches that of Multi. The relatively strong performance of Multi shows that using the siblings of a category in the BabelNet taxonomy is in general useful. However, as our results show, better results can be obtained by focusing on the predicted conceptual neighbors only. It is interesting to see that even selecting a single conceptual neighbor is already sufficient to substantially outperform the Gaussian model, although the best results are obtained for $k=4$. Comparing the WEA and BERT variants, it is notable that BERT is more successful at selecting the single best conceptual neighbor (reflected in an F1 score of 47.0 compared to 41.9). However, for $k \ge 2$, the results of the WEA and BERT are largely comparable.
<<</Quantitative Results>>>
<<<Qualitative Analysis>>>
To illustrate how conceptual neighborhood can improve classification results, Fig. FIGREF25 shows the two first principal components of the embeddings of the instances of three BabelNet categories: Songbook, Brochure and Guidebook. All three categories can be considered to be conceptual neighbors. Brochure and Guidebook are closely related categories, and we may expect there to exist borderline cases between them. This can be clearly seen in the figure, where some instances are located almost exactly on the boundary between the two categories. On the other hand, Songbook is slightly more separated in the space. Let us now consider the left-most data point from the Songbook test set, which is essentially an outlier, being more similar to instances of Guidebook than typical Songbook instances. When using a Gaussian model, this data point would not be recognised as a plausible instance. When incorporating the fact that Brochure and Guidebook are conceptual neighbors of Songbook, however, it is more likely to be classified correctly.
To illustrate the notion of conceptual neighborhood itself, Table TABREF27 displays some selected category pairs from the training set (i.e. the category pairs that were used to train the text classifier), which intuitively correspond to conceptual neighbors. The left column contains some selected examples of category pairs with a high $s_{AB}$ score of at least 0.9. As these examples illustrate, we found that a high $s_{AB}$ score was indeed often predictive of conceptual neighborhood. As the right column of this table illustrates, there are several category pairs with a lower $s_{AB}$ score of around 0.5 which intuitively still seem to correspond to conceptual neighbors. When looking at category pairs with even lower scores, however, conceptual neighborhood becomes rare. Moreover, while there are several pairs with high scores which are not actually conceptual neighbors (e.g. the pair Actor – Makeup Artist), they tend to be categories which are still closely related. This means that the impact of incorrectly treating them as conceptual neighbors on the performance of our method is likely to be limited. On the other hand, when looking at category pairs with a very low confidence score we find many unrelated pairs, which we can expect to be more harmful when considered as conceptual neighbors, as the combined Gaussian will then cover a much larger part of the space. Some examples of such pairs include Primary school – Financial institution, Movie theatre – Housing estate, Corporate title – Pharaoh and Fraternity – Headquarters.
Finally, in Tables TABREF28 and TABREF29, we show examples of the top conceptual neighbors that were selected for some categories from the test set. Table TABREF28 shows examples of BabelNet categories for which the F1 score of our SECOND-WEA$_1$ classifier was rather low. As can be seen, the conceptual neighbors that were chosen in these cases are not suitable. For instance, Bachelor's degree is a near-synonym of Undergraduate degree, hence assuming them to be conceptual neighbors would clearly be detrimental. In contrast, when looking at the examples in Table TABREF29, where categories are shown with a higher F1 score, we find examples of conceptual neighbors that are intuitively much more meaningful.
<<</Qualitative Analysis>>>
<<</Experiments>>>
<<<Conclusions>>>
We have studied the role of conceptual neighborhood for modelling categories, focusing especially on categories with a relatively small number of instances, for which standard modelling approaches are challenging. To this end, we have first introduced a method for predicting conceptual neighborhood from text, by taking advantage of BabelNet to implement a distant supervision strategy. We then used the resulting classifier to identify the most likely conceptual neighbors of a given target category, and empirically showed that incorporating these conceptual neighbors leads to a better performance in a category induction task.
In terms of future work, it would be interesting to look at other types of lexical relations that can be predicted from text. One possible strategy would be to predict conceptual betweenness, where a category $B$ is said to be between $A$ and $C$ if $B$ has all the properties that $A$ and $C$ have in common BIBREF40 (e.g. we can think of wine as being conceptually between beer and rum). In particular, if $B$ is predicted to be conceptually between $A$ and $C$ then we would also expect the region modelling $B$ to be between the regions modelling $A$ and $C$.
Acknowledgments. Jose Camacho-Collados, Luis Espinosa-Anke and Steven Schockaert were funded by ERC Starting Grant 637277. Zied Bouraoui was supported by CNRS PEPS INS2I MODERN.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Conclusions, Abstract"
],
"type": "disordered_section"
}
|
1912.01679
|
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition
<<<Abstract>>>
We propose a novel approach to semi-supervised automatic speech recognition (ASR). We first exploit a large amount of unlabeled audio data via representation learning, where we reconstruct a temporal slice of filterbank features from past and future context frames. The resulting deep contextualized acoustic representations (DeCoAR) are then used to train a CTC-based end-to-end ASR system using a smaller amount of labeled audio data. In our experiments, we show that systems trained on DeCoAR consistently outperform ones trained on conventional filterbank features, giving 42% and 19% relative improvement over the baseline on WSJ eval92 and LibriSpeech test-clean, respectively. Our approach can drastically reduce the amount of labeled data required; unsupervised training on LibriSpeech then supervision with 100 hours of labeled data achieves performance on par with training on all 960 hours directly.
<<</Abstract>>>
<<<Introduction>>>
Current state-of-the-art models for speech recognition require vast amounts of transcribed audio data to attain good performance. In particular, end-to-end ASR models are more demanding in the amount of training data required when compared to traditional hybrid models. While obtaining a large amount of labeled data requires substantial effort and resources, it is much less costly to obtain abundant unlabeled data.
For this reason, semi-supervised learning (SSL) is often used when training ASR systems. The most commonly-used SSL approach in ASR is self-training BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. In this approach, a smaller labeled set is used to train an initial seed model, which is applied to a larger amount of unlabeled data to generate hypotheses. The unlabeled data with the most reliable hypotheses are added to the training data for re-training. This process is repeated iteratively. However, self-training is sensitive to the quality of the hypotheses and requires careful calibration of the confidence measures. Other SSL approaches include: pre-training on a large amount of unlabeled data with restricted Boltzmann machines (RBMs) BIBREF5; entropy minimization BIBREF6, BIBREF7, BIBREF8, where the uncertainty of the unlabeled data is incorporated as part of the training objective; and graph-based approaches BIBREF9, where the manifold smoothness assumption is exploited. Recently, transfer learning from large-scale pre-trained language models (LMs) BIBREF10, BIBREF11, BIBREF12 has shown great success and achieved state-of-the-art performance in many NLP tasks. The core idea of these approaches is to learn efficient word representations by pre-training on massive amounts of unlabeled text via word completion. These representations can then be used for downstream tasks with labeled data.
Inspired by this, we propose an SSL framework that learns efficient, context-aware acoustic representations using a large amount of unlabeled data, and then applies these representations to ASR tasks using a limited amount of labeled data. In our implementation, we perform acoustic representation learning using forward and backward LSTMs and a training objective that minimizes the reconstruction error of a temporal slice of filterbank features given previous and future context frames. After pre-training, we fix these parameters and add output layers with connectionist temporal classification (CTC) loss for the ASR task.
The paper is organized as follows: in Section SECREF2, we give a brief overview of related work in acoustic representation learning and SSL. In Section SECREF3, we describe an implementation of our SSL framework with DeCoAR learning. We describe the experimental setup in Section SECREF4 and the results on WSJ and LibriSpeech in Section SECREF5, followed by our conclusions in Section SECREF6.
<<</Introduction>>>
<<<Related work>>>
While semi-supervised learning has been exploited in a plethora of works on hybrid ASR systems, very little work has been done on its end-to-end counterparts BIBREF3, BIBREF13, BIBREF14. In BIBREF3, an intermediate representation of speech and text is learned via a shared encoder network, which was trained to optimize a combination of ASR loss, text-to-text autoencoder loss and inter-domain loss; the latter two loss functions did not require paired speech and text data. Learning efficient acoustic representations can be traced back to restricted Boltzmann machines BIBREF15, BIBREF16, BIBREF17, which allow pre-training on large amounts of unlabeled data before training deep neural network acoustic models.
More recently, acoustic representation learning has drawn increasing attention in speech processing BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23. For example, an autoregressive predictive coding (APC) model was proposed in BIBREF20 for unsupervised speech representation learning and was applied to phone classification and speaker verification. WaveNet auto-encoders BIBREF21 proposed contrastive predictive coding (CPC) to learn speech representations and were applied to an unsupervised acoustic unit discovery task. Wav2vec BIBREF22 proposed a multi-layer convolutional neural network optimized via noise-contrastive binary classification and was applied to WSJ ASR tasks.
Unlike the speech representations described in BIBREF22, BIBREF20, our representations are optimized to use bi-directional contexts to auto-regressively reconstruct unseen frames. Thus, they are deep contextualized representations that are functions of the entire input sentence. More importantly, our work is a general semi-supervised training framework that can be applied to different systems and requires no architecture change.
<<</Related work>>>
<<<DEep COntextualized Acoustic Representations>>>
<<<Representation learning from unlabeled data>>>
Our approach is largely inspired by ELMo BIBREF10. In ELMo, given a sequence of $T$ tokens $(w_1,w_2,...,w_T)$, a forward language model (implemented with an LSTM) computes its probability using the chain rule decomposition:
Similarly, a backward language model computes the sequence probability by modeling the probability of token $w_t$ given its future context $w_{t+1},\cdots , w_T$ as follows:
ELMo is trained by maximizing the joint log-likelihood of both forward and backward language model probabilities:
where $\Theta _x$ is the parameter for the token representation layer, $\Theta _s$ is the parameter for the softmax layer, and $\overrightarrow{\Theta }_{\text{LSTM}}$, $\overleftarrow{\Theta }_{\text{LSTM}}$ are the parameters of forward and backward LSTM layers, respectively. As the word representations are learned with neural networks that use past and future information, they are referred to as deep contextualized word representations.
For speech processing, predicting a single frame $\mathbf {x}_t$ may be a trivial task, as it could be solved by exploiting the temporal smoothness of the signal. In the APC model BIBREF20, the authors propose predicting a frame $K$ steps ahead of the current one. Namely, the model aims to minimize the $\ell _1$ loss between an acoustic feature vector $\mathbf {x}$ at time $t+K$ and a reconstruction $\mathbf {y}$ predicted at time $t$: $\sum _{t=1}^{T-K} |\mathbf {x}_{t+K} - \mathbf {y}_t|$. They conjectured this would induce the model to learn more global structure rather than simply leveraging local information within the signal.
We propose combining the bidirectionality of ELMo and the reconstruction objective of APC to give deep contextualized acoustic representations (DeCoAR). We train the model to predict a slice of $K$ acoustic feature vectors, given past and future acoustic vectors. As depicted on the left side of Figure FIGREF1, a stack of forward and backward LSTMs are applied to the entire unlabeled input sequence $\mathbf {X} = (\mathbf {x}_1,\cdots ,\mathbf {x}_T)$. The network computes a hidden representation that encodes information from both previous and future frames (i.e. $\overrightarrow{\mathbf {z}}_t, \overleftarrow{\mathbf {z}}_t$) for each frame $\mathbf {x}_t$. Given a sequence of acoustic feature inputs $(\mathbf {x}_1, ..., \mathbf {x}_{T}) \in \mathbb {R}^d$, for each slice $(\mathbf {x}_t, \mathbf {x}_{t+1}, ..., \mathbf {x}_{t+K})$ starting at time step $t$, our objective is defined as follows:
where $[\overrightarrow{\mathbf {z}}_t; \overleftarrow{\mathbf {z}}_{t}] \in \mathbb {R}^{2h}$ are the concatenated forward and backward states from the last LSTM layer, and
is a position-dependent feed-forward network with 512 hidden dimensions. The final loss $\mathcal {L}$ is summed over all possible slices in the entire sequence:
Note this can be implemented efficiently as a layer which predicts these $(K+1)$ frames at each position $t$, all at once. We compare with the use of unidirectional LSTMs and various slice sizes in Section SECREF5.
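As a concrete illustration, the following PyTorch-style sketch implements one reading of the slice-reconstruction objective. The feature dimension, layer sizes, and the pairing of the forward state at $t$ with the backward state at $t+K$ are assumptions based on the description above and on the denoiser analysis later in the paper; this is not the authors' code.

import torch
import torch.nn as nn

class DeCoARSketch(nn.Module):
    def __init__(self, feat_dim=80, hidden=1024, layers=4, slice_size=18):
        super().__init__()
        self.hidden, self.slice_size = hidden, slice_size
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=layers,
                               bidirectional=True)
        # One position-dependent feed-forward decoder FFN_i per offset i = 0..K.
        self.ffn = nn.ModuleList([
            nn.Sequential(nn.Linear(2 * hidden, 512), nn.ReLU(),
                          nn.Linear(512, feat_dim))
            for _ in range(slice_size + 1)])

    def forward(self, x):                      # x: (T, batch, feat_dim)
        out, _ = self.encoder(x)               # (T, batch, 2 * hidden)
        fwd, bwd = out[..., :self.hidden], out[..., self.hidden:]
        T, K = x.size(0), self.slice_size
        loss = x.new_zeros(())
        for t in range(T - K):                 # every slice (x_t, ..., x_{t+K})
            ctx = torch.cat([fwd[t], bwd[t + K]], dim=-1)
            for i in range(K + 1):             # ell_1 reconstruction of x_{t+i}
                loss = loss + (self.ffn[i](ctx) - x[t + i]).abs().sum()
        return loss

A training step would simply compute loss = DeCoARSketch()(batch) and backpropagate; the efficient all-at-once implementation mentioned above amounts to vectorizing the two inner loops.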
<<</Representation learning from unlabeled data>>>
<<<End-to-end ASR training with labeled data>>>
After we have pre-trained the DeCoAR on unlabeled data, we freeze the parameters in the architecture. To train an end-to-end ASR system using labeled data, we remove the reconstruction layer and add two BLSTM layers with CTC loss BIBREF24, as illustrated on the right side of Figure FIGREF1. The DeCoAR vectors induced by the labeled data in the forward and backward layers are concatenated. We fine-tune the parameters of this ASR-specific new layer on the labeled data.
While we use LSTMs and CTC loss in our implementation, our SSL approach should work for other layer choices (e.g. TDNN, CNN, self-attention) and other downstream ASR models (e.g. hybrid, seq2seq, RNN transducers) as well.
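A minimal sketch of this supervised stage, reusing the DeCoARSketch module and imports from the sketch above; the projection size, the use of PyTorch's built-in CTC loss, and the output size of 71 phonemes plus a blank (as in the experimental setup below) are assumptions rather than the authors' exact configuration.

class CTCHeadSketch(nn.Module):
    def __init__(self, pretrained, proj=512, num_labels=72):  # 71 phones + blank
        super().__init__()
        self.pretrained = pretrained
        for p in self.pretrained.parameters():  # freeze the DeCoAR encoder
            p.requires_grad = False
        self.proj = nn.Linear(2 * pretrained.hidden, proj)
        self.blstm = nn.LSTM(proj, proj, num_layers=2, bidirectional=True)
        self.out = nn.Linear(2 * proj, num_labels)
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)

    def forward(self, x, targets, input_lens, target_lens):
        with torch.no_grad():                   # DeCoAR vectors for labeled data
            feats, _ = self.pretrained.encoder(x)
        h, _ = self.blstm(self.proj(feats))
        log_probs = self.out(h).log_softmax(dim=-1)  # (T, batch, num_labels)
        return self.ctc(log_probs, targets, input_lens, target_lens)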
<<</End-to-end ASR training with labeled data>>>
<<</DEep COntextualized Acoustic Representations>>>
<<<Experimental Setup>>>
<<<Data>>>
We conducted our experiments on the WSJ and LibriSpeech datasets, pre-training by using one of the two training sets as unlabeled data. To simulate the SSL setting in WSJ, we used 30%, 50%, and 100% of the labeled data for ASR training, consisting of 25 hours, 40 hours, and 81 hours, respectively. We used dev93 for validation and eval92 for evaluation. For LibriSpeech, the amount of training data used varied from 100 hours to the entire 960 hours. We used dev-clean for validation and test-clean, test-other for evaluation.
<<</Data>>>
<<<ASR systems>>>
Our experiments consisted of three different setups: 1) a fully-supervised system using all labeled data; 2) an SSL system using wav2vec features; 3) an SSL system using our proposed DeCoAR features. All models used were based on deep BLSTMs with the CTC loss criterion.
In the supervised ASR setup, we used conventional log-mel filterbank features, which were extracted with a 25ms sliding window at a 10ms frame rate. The features were normalized via mean subtraction and variance normalization on a per-speaker basis. The model had 6 BLSTM layers, with 512 cells in each direction. We found that further increasing the number of cells did not improve performance, so we used this configuration as our supervised ASR baseline. The output CTC labels were 71 phonemes plus one blank symbol.
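For concreteness, the per-speaker normalization can be sketched as follows; this is a minimal NumPy illustration assuming utterances are already grouped by speaker, not the authors' feature pipeline.

import numpy as np

def per_speaker_cmvn(utts_by_speaker):
    """utts_by_speaker: dict speaker_id -> list of (num_frames, num_mels) arrays."""
    normalized = {}
    for speaker, utts in utts_by_speaker.items():
        frames = np.concatenate(utts, axis=0)     # pool all frames of the speaker
        mean, std = frames.mean(axis=0), frames.std(axis=0) + 1e-8
        normalized[speaker] = [(u - mean) / std for u in utts]
    return normalized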
In the SSL ASR setup, we pre-trained a 4-layer BLSTM (1024 cells per sub-layer) to learn DeCoAR features according to the loss defined in Equation DISPLAY_FORM4, using a slice size of 18. We optimized the network with SGD and a Noam learning rate schedule, starting with a learning rate of 0.001, warming up for 500 updates, and then performing inverse square-root decay. We grouped the input sequences by length with a batch size of 64, and trained the models on 8 GPUs. After the representation network was trained, we froze its parameters, and added a projection layer followed by a 2-layer BLSTM with CTC loss on top of it. We fed the labeled data to the network. For comparison, we obtained 512-dimensional wav2vec representations BIBREF22 from the wav2vec-large model. Their model was pre-trained on 960-hour LibriSpeech data with a contrastive loss and had 12 convolutional layers with skip connections.
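The warmup-plus-inverse-square-root schedule described here can be sketched as below; treating 0.001 as the peak rate reached after 500 warmup updates is our assumption, since the exact formula is not given.

def noam_like_lr(step, peak=1e-3, warmup=500):
    """Linear warmup to `peak` over `warmup` updates, then 1/sqrt(step) decay."""
    step = max(step, 1)
    if step <= warmup:
        return peak * step / warmup
    return peak * (warmup / step) ** 0.5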
For evaluation purposes, we applied WFST-based decoding using EESEN BIBREF25. We composed the CTC labels, lexicons and language models (unpruned trigram LM for WSJ, 4-gram for LibriSpeech) into a decoding graph. The acoustic model score was set to $0.8$ and $1.0$ for WSJ and LibriSpeech, respectively, and the blank symbol prior scale was set to $0.3$ for both tasks. We report the performance in word error rate (WER).
<<</ASR systems>>>
<<</Experimental Setup>>>
<<<Results>>>
<<<Semi-supervised WSJ results>>>
Table TABREF14 shows our results on semi-supervised WSJ. We demonstrate that DeCoAR features outperform filterbank and wav2vec features, with relative improvements of 42% and 20%, respectively. The lower part of the table shows that with smaller amounts of labeled data, the DeCoAR features are significantly better than the filterbank features: compared to the system trained on 100% of the labeled data with filterbank features, we achieve comparable results on eval92 using 30% of the labeled data and better performance on eval92 using 50% of the labeled data.
<<</Semi-supervised WSJ results>>>
<<<Semi-supervised LibriSpeech results>>>
Table TABREF7 shows the results on semi-supervised LibriSpeech. Both our representations and wav2vec BIBREF22 are trained on 960h LibriSpeech data. We conduct our semi-supervised experiments using 100h (train-clean-100), 360h (train-clean-360), 460h, and 960h of training data. Our approach outperforms both the baseline and wav2vec model in each SSL scenario. One notable observation is that using only 100 hours of transcribed data achieves very similar performance to the system trained on the full 960-hour data with filterbank features. On the more challenging test-other dataset, we also achieve performance on par with the filterbank baseline using a 360h subset. Furthermore, training with our DeCoAR features improves over the baseline even when using the exact same training data (960h). Note that while BIBREF26 introduced SpecAugment to significantly improve LibriSpeech performance via data augmentation, and BIBREF27 achieved state-of-the-art results using both hybrid and end-to-end models, our approach focuses on the SSL case with less labeled training data via our DeCoAR features.
<<</Semi-supervised LibriSpeech results>>>
<<<Ablation Study and Analysis>>>
<<<Context window size>>>
We study the effect of the context window size during pre-training. Table TABREF20 shows that masking and predicting a larger slice of frames can actually degrade performance while increasing training time. A similar effect was observed in SpanBERT BIBREF28, another deep contextual word representation, whose authors found that masking a mean span of 3.8 consecutive words was ideal for their word reconstruction objective.
<<</Context window size>>>
<<<Unidirectional versus bidirectional context>>>
Next, we study the importance of bidirectional context by training a unidirectional LSTM, which corresponds to only using $\overrightarrow{\mathbf {z}}_t$ to predict $\mathbf {x}_t, \cdots , \mathbf {x}_{t+K}$. Table TABREF22 shows that this unidirectional model achieves comparable performance to the wav2vec model BIBREF22, suggesting that bidirectionality is the largest contributor to DeCoAR's improved performance.
<<</Unidirectional versus bidirectional context>>>
<<<DeCoAR as denoiser>>>
Since our model is trained by predicting masked frames, DeCoAR has the side effect of learning decoder feed-forward networks $\text{FFN}_i$ which reconstruct the $(t+i)$-th filterbank frame from contexts $\overrightarrow{\mathbf {z}}_t$ and $\overleftarrow{\mathbf {z}}_{t+K}$. In this section, we consider the spectrogram reconstructed by taking the output of $\text{FFN}_i$ at all times $t$.
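Reusing the DeCoARSketch module (and imports) from the representation-learning sketch above, this reconstruction can be written as follows, again under the assumed pairing of the forward state at $t$ with the backward state at $t+K$.

def reconstruct_offset(model, x, i):
    """Predicted frames x_{t+i} for every slice start t; x is (T, batch, feat_dim)."""
    out, _ = model.encoder(x)
    fwd, bwd = out[..., :model.hidden], out[..., model.hidden:]
    K = model.slice_size
    preds = [model.ffn[i](torch.cat([fwd[t], bwd[t + K]], dim=-1))
             for t in range(x.size(0) - K)]
    return torch.stack(preds)                  # stack over time to form a spectrogram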
The qualitative result is depicted in Figure FIGREF15 where the slice size is 18. We see that when $i=0$ (i.e., when reconstructing the $t$-th frame from $[\overrightarrow{\mathbf {z}}_t; \overleftarrow{\mathbf {z}}_{t+K}]$), the reconstruction is almost perfect. However, as soon as one predicts unseen frames $i=1, 4, 8$ (of 16), the reconstruction becomes more simplistic, but not by much. Background energy in the silent frames 510-550 is zeroed out. By $i=8$ artifacts begin to occur, such as an erroneous sharp band of energy being predicted around frame 555. This behavior is compatible with recent NLP works that interpret contextual word representations as denoising autoencoders BIBREF12.
The surprising ability of DeCoAR to broadly reconstruct a frame $\overrightarrow{\mathbf {x}}_{t+{K/2}}$ in the middle of a missing 16-frame slice suggests that its representations $[\overrightarrow{\mathbf {z}}_t; \overleftarrow{\mathbf {z}}_{t+K}]$ capture longer-term phonetic structure during unsupervised pre-training, as with APC BIBREF20. This motivates its success in the semi-supervised ASR task with only two additional layers, as it suggests DeCoAR learns phonetic representations similar to those likely learned by the first 4 layers of a corresponding end-to-end ASR model.
<<</DeCoAR as denoiser>>>
<<</Ablation Study and Analysis>>>
<<</Results>>>
<<<Conclusion>>>
In this paper, we introduce a novel semi-supervised learning approach for automatic speech recognition. We first propose a novel objective for a deep bidirectional LSTM network, where large amounts of unlabeled data are used to learn deep contextualized acoustic representations (DeCoAR). These DeCoAR features are used as the representations of labeled data to train a CTC-based end-to-end ASR model. In our experiments, we show a 42% relative improvement on WSJ compared to a baseline trained on log-mel filterbank features. On LibriSpeech, we achieve performance similar to training on all 960 hours of labeled data by pre-training on unlabeled data and then using only 100 hours of labeled data. While we use BLSTM-CTC as our ASR model, our approach can be applied to other end-to-end ASR models.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Introduction, Results"
],
"type": "disordered_section"
}
|
2004.03061
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Information-Theoretic Probing for Linguistic Structure
<<<Abstract>>>
The success of neural networks on a diverse set of NLP tasks has led researchers to question how much do these networks actually know about natural language. Probes are a natural way of assessing this. When probing, a researcher chooses a linguistic task and trains a supervised model to predict annotation in that linguistic task from the network's learned representations. If the probe does well, the researcher may conclude that the representations encode knowledge related to the task. A commonly held belief is that using simpler models as probes is better; the logic is that such models will identify linguistic structure, but not learn the task itself. We propose an information-theoretic formalization of probing as estimating mutual information that contradicts this received wisdom: one should always select the highest performing probe one can, even if it is more complex, since it will result in a tighter estimate. The empirical portion of our paper focuses on obtaining tight estimates for how much information BERT knows about parts of speech in a set of five typologically diverse languages that are often underrepresented in parsing research, plus English, totaling six languages. We find BERT accounts for only at most 5% more information than traditional, type-based word embeddings.
<<</Abstract>>>
<<<Introduction>>>
Neural networks are the backbone of modern state-of-the-art Natural Language Processing (NLP) systems. One inherent by-product of training a neural network is the production of real-valued representations. Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks' impressive performance on many NLP tasks BIBREF0. As a result of this speculation, one common thread of research focuses on the construction of probes, i.e., supervised models that are trained to extract the linguistic properties directly BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. A syntactic probe, then, is a model for extracting syntactic properties, such as part-of-speech, from the representations BIBREF6.
In this work, we question what the goal of probing for linguistic properties ought to be. Informally, probing is often described as an attempt to discern how much information representations encode about a specific linguistic property. We make this statement more formal: We assert that the goal of probing ought to be estimating the mutual information BIBREF7 between a representation-valued random variable and a linguistic property-valued random variable. This formulation gives probing a clean, information-theoretic foundation, and allows us to consider what “probing” actually means.
Our analysis also provides insight into how to choose a probe family: We show that choosing the highest-performing probe, independent of its complexity, is optimal for achieving the best estimate of mutual information (MI). This contradicts the received wisdom that one should always select simple probes over more complex ones BIBREF8, BIBREF9, BIBREF10. In this context, we also discuss the recent work of hewitt-liang-2019-designing who propose selectivity as a criterion for choosing families of probes. hewitt-liang-2019-designing define selectivity as the performance difference between a probe on the target task and a control task, writing “[t]he selectivity of a probe puts linguistic task accuracy in context with the probe's capacity to memorize from word types.” They further ponder: “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” Information-theoretically, there is no difference between learning the task and probing for linguistic structure, as we will show; thus, it follows that one should always employ the best possible probe for the task without resorting to artificial constraints.
In support of our discussion, we empirically analyze word-level part-of-speech labeling, a common syntactic probing task BIBREF6, BIBREF11, within our framework. Working on a typologically diverse set of languages (Basque, Czech, English, Finnish, Tamil, and Turkish), we show that the representations from BERT, a common contextualized embedder, only account for at most $5\%$ more of the part-of-speech tag entropy than a control. These modest improvements suggest that most of the information needed to tag part-of-speech well is encoded at the lexical level, and does not require the sentential context of the word. Put more simply, words are not very ambiguous with respect to part of speech, a result known to practitioners of NLP BIBREF12. We interpret this to mean that part-of-speech labeling is not a very informative probing task.
We also remark that formulating probing information-theoretically gives us a simple, but stunning result: contextual word embeddings, e.g., BERT BIBREF13 and ELMo BIBREF14, contain the same amount of information about the linguistic property of interest as the original sentence. This follows naturally from the data-processing inequality under a very mild assumption. What this suggests is that, in a certain sense, probing for linguistic properties in representations may not be a well grounded enterprise at all.
<<</Introduction>>>
<<<Word-Level Syntactic Probes for Contextual Embeddings>>>
Following hewitt-liang-2019-designing, we consider probes that examine syntactic knowledge in contextualized embeddings. These probes only consider a single token's embedding and try to perform the task using only that information. Specifically, in this work, we consider part-of-speech (POS) labeling: determining a word's part of speech in a given sentence. For example, we wish to determine whether the word love is a noun or a verb. This task requires the sentential context for success. As an example, consider the utterance “love is blind” where, only with the context, is it clear that love is a noun. Thus, to do well on this task, the contextualized embeddings need to encode enough about the surrounding context to correctly guess the POS.
<<<Notation>>>
Let $S$ be a random variable ranging over all possible sequences of words. For the sake of this paper, we assume the vocabulary $\mathcal {V}$ is finite and, thus, the values $S$ can take are in $\mathcal {V}^*$. We write $\mathbf {s}\in S$ as $\mathbf {s}= w_1 \cdots w_{|\mathbf {s}|}$ for a specific sentence, where each $w_i \in \mathcal {V}$ is a specific word in the sentence and the position $i \in \mathbb {N}^{+}$. We also define the random variable $W$ that ranges over the vocabulary $\mathcal {V}$. We define both a sentence-level random variable $S$ and a word-level random variable $W$ since each will be useful in different contexts during our exposition.
Next, let $T$ be a random variable whose possible values are the analyses $t$ that we want to consider for word $w_i$ in its sentential context, $\mathbf {s}= w_1 \cdots w_i \cdots w_{|\mathbf {s}|}$. In this work, we will focus on predicting the part-of-speech tag of the $i^\text{th}$ word $w_i$. We denote the set of values $T$ can take as the set $\mathcal {T}$. Finally, let $R$ be a representation-valued random variable for the $i^\text{th}$ word $w_i$ in a sentence derived from the entire sentence $\mathbf {s}$. We write $\mathbf {r}\in \mathbb {R}^d$ for a value of $R$. While any given value $\mathbf {r}$ is a continuous vector, there are only a countable number of values $R$ can take. To see this, note there are only a countable number of sentences in $\mathcal {V}^*$.
Next, we assume there exists a true distribution $p(t, \mathbf {s}, i)$ over analyses $t$ (elements of $\mathcal {T}$), sentences $\mathbf {s}$ (elements of $\mathcal {V}^*$), and positions $i$ (elements of $\mathbb {N}^{+}$). Note that the conditional distribution $p(t \mid \mathbf {s}, i)$ gives us the true distribution over analyses $t$ for the $i^{\text{th}}$ word in the sentence $\mathbf {s}$. We will augment this distribution such that $p$ is additionally a distribution over $\mathbf {r}$, i.e.,
where we define the augmentation as a Dirac's delta function
Since contextual embeddings are a deterministic function of a sentence $\mathbf {s}$, the augmented distribution in eq:true has no more randomness than the original—its entropy is the same. We assume the values of the random variables defined above are distributed according to this (unknown) $p$. While we do not have access to $p$, we assume the data in our corpus were drawn according to it. Note that $W$—the random variable over possible words—is distributed according to the marginal distribution
where we define the deterministic distribution
<<</Notation>>>
<<<Probing as Mutual Information>>>
The task of supervised probing is an attempt to ascertain how much information a specific representation $\mathbf {r}$ tells us about the value of $t$. This is naturally expressed as the mutual information, a quantity from information theory:
where we define the entropy, which is constant with respect to the representations, as
and where we define the conditional entropy as
the point-wise conditional entropy inside the sum is defined as
Again, we will not know any of the distributions required to compute these quantities; the distributions in the formulae are marginals and conditionals of the true distribution discussed in eq:true.
<<</Probing as Mutual Information>>>
<<<Bounding Mutual Information>>>
The desired conditional entropy, $\mathrm {H}(T \mid R)$ is not readily available, but with a model $q_{{\theta }}(\mathbf {t}\mid \mathbf {r})$ in hand, we can upper-bound it by measuring their empirical cross entropy
where $\mathrm {H}_{q_{{\theta }}}(T \mid R)$ is the cross-entropy we obtain by using $q_{{\theta }}$ to get this estimate. Since the KL divergence is always positive, we may lower-bound the desired mutual information
This bound gets tighter, the more similar (in the sense of the KL divergence) $q_{{\theta }}(\cdot \mid \mathbf {r})$ is to the true distribution $p(\cdot \mid \mathbf {r})$.
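In practice the bound is computed by combining a plug-in estimate of $\mathrm {H}(T)$ with the probe's cross-entropy on held-out data. The following Python sketch only illustrates this arithmetic; it assumes the probe's log-probabilities for the gold tags have already been collected.

import math
from collections import Counter

def entropy_estimate(tags):
    """Plug-in estimate of H(T) in bits from a list of gold tags."""
    counts = Counter(tags)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def mi_lower_bound(tags, log2_probs_of_gold):
    """H(T) - H_q(T | R): the probe's cross-entropy upper-bounds H(T | R),
    so the difference lower-bounds I(T; R). log2_probs_of_gold holds the
    probe's log2-probability of the gold tag for each held-out token."""
    cross_entropy = -sum(log2_probs_of_gold) / len(log2_probs_of_gold)
    return entropy_estimate(tags) - cross_entropy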
<<<Bigger Probes are Better.>>>
If we accept mutual information as a natural measure for how much representations encode a target linguistic task (§SECREF6), then the best estimate of that mutual information is the one where the probe $q_{{\theta }}(t \mid \mathbf {r})$ is best at the target task. In other words, we want the best probe $q_{{\theta }}(t \mid \mathbf {r})$ such that we get the tightest bound to the actual distribution $p(t\mid \mathbf {r})$. This paints the question posed by hewitt-liang-2019-designing, who write “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” as a false dichotomy. From an information-theoretic view, we will always prefer the probe that does better at the target task, since there is no difference between learning a task and the representations encoding the linguistic structure.
<<</Bigger Probes are Better.>>>
<<</Bounding Mutual Information>>>
<<</Word-Level Syntactic Probes for Contextual Embeddings>>>
<<<Control Functions>>>
To place the performance of a probe in perspective, hewitt-liang-2019-designing develop the notion of a control task. Inspired by this, we develop an analogue we term control functions, which are functions of the representation-valued random variable $R$. Similar to hewitt-liang-2019-designing's control tasks, the goal of a control function $\mathbf {c}(\cdot )$ is to place the mutual information $\mathrm {I}(T; R)$ in the context of a baseline that the control function encodes. Control functions have their root in the data-processing inequality BIBREF7, which states that, for any function $\mathbf {c}(\cdot )$, we have
In other words, information can only be lost by processing data. A common adage associated with this inequality is “garbage in, garbage out.”
<<<Type-Level Control Functions>>>
We will focus on type-level control functions in this paper; these functions have the effect of decontextualizing the embeddings. Such functions allow us to inquire how much the contextual aspect of the contextual embeddings help the probe perform the target task. To show that we may map from contextual embeddings to the identity of the word type, we need the following assumption about the embeddings.
Assumption 1 Every contextualized embedding is unique, i.e., for any pair of sentences $\mathbf {s}, \mathbf {s}^{\prime } \in \mathcal {V}^*$, we have $(\mathbf {s}\ne \mathbf {s}^{\prime }) \mid \mid (i \ne j) \Rightarrow \textsc {bert} (\mathbf {s})_i \ne \textsc {bert} (\mathbf {s}^{\prime })_j$ for all $i \in \lbrace 1, \ldots |\mathbf {s}|\rbrace $ and $j \in \lbrace 1, \ldots , |\mathbf {s}^{\prime }|\rbrace $.
We note that ass:one is mild. Contextualized word embeddings map words (in their context) to $\mathbb {R}^d$, which is an uncountably infinite space. However, there are only a countable number of sentences, which implies only a countable number of sequences of real vectors in $\mathbb {R}^d$ that a contextualized embedder may produce. The event that any two embeddings would be the same across two distinct sentences is infinitesimally small. ass:one yields the following corollary.
Corollary 1 There exists a function $\emph {\texttt {id} } : \mathbb {R}^d \rightarrow V$ that maps a contextualized embedding to its word type. The function $\emph {\texttt {id} }$ is not a bijection since multiple embeddings will map to the same type.
Using cor:one, we can show that any non-contextualized word embedding will contain no more information than a contextualized word embedding. More formally, we do this by constructing a look-up function $\mathbf {e}: V \rightarrow \mathbb {R}^d$ that maps a word to a word embedding. This embedding may be one-hot, randomly generated ahead of time, or the output of a data-driven embedding method, e.g. fastText BIBREF15. We can then construct a control function as the composition of the look-up function $\mathbf {e}$ and the id function $\texttt {id} $. Using the data-processing inequality, we can prove that in a word-level prediction task, any non-contextual (type level) word-embedding will contain no more information than a contextualized (token level) one, such as BERT and ELMo. Specifically, we have
This result is intuitive and, perhaps, trivial—context matters information-theoretically. However, it gives us a principled foundation by which to measure the effectiveness of probes as we will show in sec:gain.
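A type-level control function $\mathbf {c}= \mathbf {e}\circ \texttt {id} $ can be sketched as follows. In practice $\texttt {id} $ is realized by simply carrying the word type alongside each contextual vector rather than inverting the embedder; the choice between a pretrained (e.g. fastText) vector and a fixed random one mirrors the look-up functions mentioned above, and the defaults are illustrative.

import numpy as np

def type_level_control(pretrained=None, dim=300, seed=0):
    """Returns c(.) such that every token of the same word type receives the
    same non-contextual vector: a pretrained vector when available,
    otherwise a fixed random one."""
    rng, cache = np.random.default_rng(seed), {}

    def c(word_type):
        if word_type not in cache:
            if pretrained is not None and word_type in pretrained:
                cache[word_type] = pretrained[word_type]
            else:
                cache[word_type] = rng.standard_normal(dim)
        return cache[word_type]

    return c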
<<</Type-Level Control Functions>>>
<<<How Much Information Did We Gain?>>>
We will now quantify how much a contextualized word embedding knows about a task with respect to a specific control function $\mathbf {c}(\cdot )$. We term how much more information the contextualized embeddings have about a task than a control variable the gain, which we define as
The gain function will be our method for measuring how much more information contextualized representations have over a controlled baseline, encoded as the function $\mathbf {c}$. We will empirically estimate this value in sec:experiments.
Interestingly enough, the gain has a straightforward interpretation.
Proposition 1 The gain function is equal to the following conditional mutual information
The jump from the first to the second equality follows since $R$ encodes all the information about $T$ provided by $\mathbf {c}(R)$ by construction. prop:interpretation gives us a clear understanding of the quantity we wish to estimate: It is how much information about a task is encoded in the representations, given some control knowledge. If properly designed, this control transformation will remove information from the probed representations.
<<</How Much Information Did We Gain?>>>
<<<Approximating the Gain>>>
The gain, as defined in eq:gain, is intractable to compute. In this section we derive a pair of variational bounds on $\mathcal {G}(T, R, \mathbf {e})$—one upper and one lower. To approximate the gain, we will simultaneously minimize an upper and a lower-bound on eq:gain. We begin by approximating the gain in the following manner
these cross-entropies can be empirically estimated. We will assume access to a corpus $\lbrace (t_i, \mathbf {r}_i)\rbrace _{i=1}^N$ that is human-annotated for the target linguistic property; we further assume that these are samples $(t_i, \mathbf {r}_i) \sim p(\cdot , \cdot )$ from the true distribution. This yields a second approximation that is tractable:
This approximation is exact in the limit $N \rightarrow \infty $ by the law of large numbers.
We note the approximation given in eq:approx may be either positive or negative and its estimation error follows from eq:entestimate
where we abuse the KL notation to simplify the equation. This is an undesired behavior since we know the gain itself is non-negative, by the data-processing inequality, but we have yet to devise a remedy.
We justify the approximation in eq:approx with a pair of variational bounds. The following two corollaries are a result of thm:variationalbounds in appendix:a.
Corollary 2 We have the following upper-bound on the gain
Corollary 3 We have the following lower-bound on the gain
The conjunction of cor:upper and cor:lower suggests a simple procedure for finding a good approximation: We choose $q_{{\theta }1}(\cdot \mid r)$ and $q_{{\theta }2}(\cdot \mid r)$ so as to minimize eq:upper and maximize eq:lower, respectively. These distributions contain no overlapping parameters, by construction, so these two optimization routines may be performed independently. We will optimize both with a gradient-based procedure, discussed in sec:experiments.
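Once the two probes are trained, the approximation of the gain reduces to a difference of empirical cross-entropies, as in eq:approx. A small sketch of that final computation, assuming the gold-tag log-probabilities from each probe have been collected on held-out data:

def gain_estimate(logprobs_contextual, logprobs_control):
    """Approximate gain = H_q2(T | c(R)) - H_q1(T | R); each argument is the list
    of log-probabilities the respective probe assigns to the gold tag (any log
    base, as long as both lists use the same one)."""
    ce_contextual = -sum(logprobs_contextual) / len(logprobs_contextual)
    ce_control = -sum(logprobs_control) / len(logprobs_control)
    return ce_control - ce_contextual      # can be negative with imperfect probes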
<<</Approximating the Gain>>>
<<</Control Functions>>>
<<<Understanding Probing Information-Theoretically>>>
In sec:control-functions we developed an information-theoretic framework for thinking about probing contextual word embeddings for linguistic structure. However, we now cast doubt on whether probing makes sense as a scientific endeavour. We prove in sec:context that contextualized word embeddings, by construction, contain no more information about a word-level syntactic task than the original sentence itself. Nevertheless, we do find a meaningful scientific interpretation of control functions. We expound upon this in sec:control-functions-meaning, arguing that control functions are useful, not for understanding representations, but rather for understanding the influence of sentential context on word-level syntactic tasks, e.g., labeling words with their part of speech.
<<<You Know Nothing, BERT>>>
To start, we note the following corollary
Corollary 4 It directly follows from ass:one that $\textsc {bert} $ is a bijection between sentences $\mathbf {s}$ and sequences of embeddings $\langle \mathbf {r}_1, \ldots , \mathbf {r}_{|\mathbf {s}|} \rangle $. As $\textsc {bert} $ is a bijection, it has an inverse, which we will denote as $\textsc {bert}^{-1} $.
Theorem 1 The function $\textsc {bert} (S)$ cannot provide more information about $T$ than the sentence $S$ itself.
This implies $\mathrm {I}(T ; S) = \mathrm {I}(T; \textsc {bert} (S))$. We remark this is not a BERT-specific result—it rests on the fact that the data-processing inequality is tight for bijections. While thm:bert is a straightforward application of the data-processing inequality, it has deeper ramifications for probing. It means that if we search for syntax in the contextualized word embeddings of a sentence, we should not expect to find any more syntax than is present in the original sentence. In a sense, thm:bert is a cynical statement: the endeavour of finding syntax in the contextualized embeddings of sentences is nonsensical. This is because, under ass:one, we know the answer a priori—the contextualized word embeddings of a sentence contain exactly the same amount of information about syntax as does the sentence itself.
<<</You Know Nothing, BERT>>>
<<<What Do Control Functions Mean?>>>
Information-theoretically, the interpretation of control functions is also interesting. As previously noted, our interpretation of control functions in this work does not provide information about the representations themselves. Actually, the same reasoning used in cor:one could be used to devise a function $\texttt {id} _s(\mathbf {r})$ which led from a single representation back to the whole sentence. For a type-level control function $\mathbf {c}$, by the data-processing inequality, we have that $\mathrm {I}(T; W) \ge \mathrm {I}(T; \mathbf {c}(R))$. Consequently, we can get an upper-bound on how much information we can get out of a decontextualized representation. If we assume we have perfect probes, then we get that the true gain function is $\mathrm {I}(T; S) - \mathrm {I}(T; W) = \mathrm {I}(T; S \mid W)$. This quantity is interpreted as the amount of knowledge we gain about the word-level task $T$ by knowing $S$ (i.e., the sentence) in addition to $W$ (i.e., the word). Therefore, a perfect probe would provide insights about language and not about the actual representations, which are no more than a means to an end.
<<</What Do Control Functions Mean?>>>
<<<Discussion: Ease of Extraction>>>
We do acknowledge another interpretation of the work of hewitt-liang-2019-designing inter alia; BERT makes the syntactic information present in an ordered sequence of words more easily extractable. However, ease of extraction is not a trivial notion to formalize, and indeed, we know of no attempt to do so; it is certainly more complex to determine than the number of layers in a multi-layer perceptron (MLP). Indeed, a MLP with a single hidden layer can represent any function over the unit cube, with the caveat that we may need a very large number of hidden units BIBREF16.
Although for perfect probes the above results should hold, in practice $\texttt {id} (\cdot )$ and $\mathbf {c}(\cdot )$ may be hard to approximate. Furthermore, if these functions were to be learned, they might require an unreasonably large dataset. A random embedding control function, for example, would require an infinitely large dataset to be learned—or at least one that contained all words in the vocabulary $V$. “Better” representations should make their respective probes more easily learnable—and consequently their encoded information more accessible.
We suggest that future work on probing should focus on operationalizing ease of extraction more rigorously—even though we do not attempt this ourselves. The advantage of simple probes is that they may reveal something about the structure of the encoded information—i.e., is it structured in such a way that it can be easily taken advantage of by downstream consumers of the contextualized embeddings? We suspect that many researchers who are interested in less complex probes have implicitly had this in mind.
<<</Discussion: Ease of Extraction>>>
<<</Understanding Probing Information-Theoretically>>>
<<<A Critique of Control Tasks>>>
While this paper builds on the work of hewitt-liang-2019-designing, and we agree with them that we should have control tasks when probing for linguistic properties, we disagree with parts of the methodology for the control task construction. We present these disagreements here.
<<<Structure and Randomness>>>
hewitt-liang-2019-designing introduce control tasks to evaluate the effectiveness of probes. We draw inspiration from this technique as evidenced by our introduction of control functions. However, we take issue with the suggestion that controls should have structure and randomness, to use the terminology from hewitt-liang-2019-designing. They define structure as “the output for a word token is a deterministic function of the word type.” This means that they are stripping the language of ambiguity with respect to the target task. In the case of part-of-speech labeling, love would either be a noun or a verb in a control task, never both: this is a problem. The second feature of control tasks is randomness, i.e., “the output for each word type is sampled independently at random.” In conjunction, structure and randomness may yield a relatively trivial task that does not look at all like natural language.
What is more, there is a closed-form solution for an optimal, retrieval-based “probe” that has zero parameters: If a word type appears in the training set, return the label with which it was annotated there, otherwise return the most frequently occurring label across all words in the training set. This probe will achieve an accuracy that is 1 minus the out-of-vocabulary rate (the number of tokens in the test set that correspond to novel types divided by the number of tokens) times the percentage of tags in the test set that do not correspond to the most frequent tag (the error rate of the guess-the-most-frequent-tag classifier). In short, the best model for a control task is a pure memorizer that guesses the most frequent tag for out-of-vocabulary words.
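This zero-parameter probe is simple enough to spell out; the sketch below takes the per-type majority label, which coincides with the description above whenever each type carries a single (control-task) label.

from collections import Counter, defaultdict

def fit_memorization_probe(train_tokens):
    """train_tokens: iterable of (word_type, tag) pairs. Returns predict(word_type)."""
    per_type, overall = defaultdict(Counter), Counter()
    for word, tag in train_tokens:
        per_type[word][tag] += 1
        overall[tag] += 1
    fallback = overall.most_common(1)[0][0]         # globally most frequent tag
    lookup = {w: c.most_common(1)[0][0] for w, c in per_type.items()}
    return lambda word: lookup.get(word, fallback)  # memorize, else back off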
<<</Structure and Randomness>>>
<<<What's Wrong with Memorization?>>>
hewitt-liang-2019-designing propose that probes should be optimised to maximise accuracy and selectivity. Recall that selectivity is given by the distance between the accuracy on the original task and the accuracy on the control task using the same architecture. Given their characterization of control tasks, maximising selectivity leads to the selection of a model that is bad at memorization. But why should we punish memorization? Much of linguistic competence is about generalization; however, memorization also plays a key role BIBREF17, BIBREF18, BIBREF19, with word learning BIBREF20 being an obvious example. Indeed, maximizing selectivity as a criterion for creating probes seems to artificially disfavor this property.
<<</What's Wrong with Memorization?>>>
<<<What Low-Selectivity Means>>>
hewitt-liang-2019-designing acknowledge that for the more complex task of dependency edge prediction, an MLP probe is more accurate and, therefore, preferable despite its low selectivity. However, they offer two counter-examples where the less selective neural probe exhibits drawbacks when compared to its more selective, linear counterpart. We believe both examples are a symptom of using a simple probe rather than of selectivity being a useful metric for probe selection. First, hewitt-liang-2019-designing (§3.6) point out that, in their experiments, the MLP-1 model frequently mislabels words with the suffix -s as NNPS on the POS labeling task. They present this finding as a possible example of a less selective probe being less faithful in representing what linguistic information the model has learned. Our analysis leads us to believe that, on the contrary, this shows that one should be using the best possible probe to minimize the chance of misrepresentation. Since more complex probes achieve higher accuracy on the task, as evidenced by the findings of hewitt-liang-2019-designing, we believe that the overall trend of misrepresentation is higher for the probes with higher selectivity. The same applies to the second example, discussed in §4.2 of hewitt-liang-2019-designing, where a less selective probe appears to be less faithful. The authors show that the representations on ELMo's second layer fail to outperform its word type ones (layer zero) on the POS labeling task when using the MLP-1 probe. While they argue this is evidence for selectivity being a useful metric in choosing appropriate probes, we argue that this demonstrates yet again that one needs to use a more complex probe to minimize the chances of misrepresenting what the model has learned. The fact that the linear probe shows a difference only demonstrates that the information is perhaps more accessible with ELMo, not that it is not present; see sec:ease-extract.
<<</What Low-Selectivity Means>>>
<<</A Critique of Control Tasks>>>
<<<Experiments>>>
We consider the task of POS labeling and use the universal POS tag information BIBREF21 from the Universal Dependencies 2.4 BIBREF22. We probe the multilingual release of BERT on six typologically diverse languages: Basque, Czech, English, Finnish, Tamil, and Turkish; and we compute the contextual representations of each sentence by feeding it into BERT and averaging the output word piece representations for each word, as tokenized in the treebank.
<<<Probe Architecture>>>
As expounded upon above, our purpose is to achieve the best bound on mutual information we can. To this end, we employ a deep MLP as our probe. We define the probe as
an $m$-layer neural network with the non-linearity $\sigma (\cdot ) = \mathrm {ReLU}(\cdot )$. The initial projection matrix is $W^{(1)} \in \mathbb {R}^{r_1 \times d}$ and the final projection matrix is $W^{(m)} \in \mathbb {R}^{|\mathcal {T}| \times r_{m-1}}$, where $r_i=\frac{r}{2^{i-1}}$. The remaining matrices are $W^{(i)} \in \mathbb {R}^{r_i \times r_{i-1}}$, so we halve the number of hidden states in each layer. We optimize over the hyperparameters—number of layers, hidden size, one-hot embedding size, and dropout—by using random search. For each estimate, we train 50 models and choose the one with the best validation cross-entropy. The cross-entropy on the test set is then used as our entropy estimate.
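A PyTorch sketch of this probe family; the default depth, width and dropout values below are illustrative stand-ins for the values found by random search, not the selected hyperparameters.

import torch.nn as nn

def build_probe(d, num_tags, layers=3, r=512, dropout=0.1):
    """m-layer MLP probe with ReLU non-linearities and halving hidden sizes."""
    sizes = [d] + [max(r // 2 ** i, 1) for i in range(layers - 1)]
    blocks = []
    for i in range(layers - 1):
        blocks += [nn.Linear(sizes[i], sizes[i + 1]), nn.ReLU(),
                   nn.Dropout(dropout)]
    blocks.append(nn.Linear(sizes[-1], num_tags))   # final projection to |T| tags
    return nn.Sequential(*blocks)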
<<</Probe Architecture>>>
<<<Results>>>
We know $\textsc {bert} $ can generate text in many languages; here we assess how much it actually knows about syntax in those languages, and how much more it knows than simple type-level baselines. tab:results-full presents these results, showing how much information $\textsc {bert} $, fastText and one-hot embeddings encode about POS tagging. We see that—in all analysed languages—type-level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages.
$\textsc {bert} $ presents negative gains in some of the analysed languages. Although this may seem to contradict the data-processing inequality, it is actually caused by the difficulty of approximating $\texttt {id} $ and $\mathbf {c}(\cdot )$ with a finite training set—causing $\mathrm {KL}_{q_{{\theta }1}}(T \mid R)$ to be larger than $\mathrm {KL}_{q_{{\theta }2}}(T \mid \mathbf {c}(R))$. We believe this highlights the need to formalize ease of extraction, as discussed in sec:ease-extract.
Finally, when put into perspective, multilingual $\textsc {bert} $'s representations do not seem to encode much more information about syntax than a trivial baseline. $\textsc {bert} $ only improves upon fastText in three of the six analysed languages—and even in those, it encodes at most (in English) $5\%$ additional information.
<<</Results>>>
<<</Experiments>>>
<<<Conclusion>>>
We proposed an information-theoretic formulation of probing: we define probing as the task of estimating conditional mutual information. We introduce control functions, which allows us to put the amount of information encoded in contextual representations in the context of knowledge judged to be trivial. We further explored this formalization and showed that, given perfect probes, probing can only yield insights into the language itself and tells us nothing about the representations under investigation. Keeping this in mind, we suggested a change of focus—instead of focusing on probe size or information, we should look at ease of extraction going forward.
On another note, we apply our formalization to evaluate multilingual $\textsc {bert} $'s syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than $81\%$ in all languages), it only encodes at most $5\%$ more information than some trivial baseline knowledge (a type-level representation). This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Abstract, Word-Level Syntactic Probes for Contextual Embeddings"
],
"type": "disordered_section"
}
|
2004.03061
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Information-Theoretic Probing for Linguistic Structure
<<<Abstract>>>
The success of neural networks on a diverse set of NLP tasks has led researchers to question how much do these networks actually know about natural language. Probes are a natural way of assessing this. When probing, a researcher chooses a linguistic task and trains a supervised model to predict annotation in that linguistic task from the network's learned representations. If the probe does well, the researcher may conclude that the representations encode knowledge related to the task. A commonly held belief is that using simpler models as probes is better; the logic is that such models will identify linguistic structure, but not learn the task itself. We propose an information-theoretic formalization of probing as estimating mutual information that contradicts this received wisdom: one should always select the highest performing probe one can, even if it is more complex, since it will result in a tighter estimate. The empirical portion of our paper focuses on obtaining tight estimates for how much information BERT knows about parts of speech in a set of five typologically diverse languages that are often underrepresented in parsing research, plus English, totaling six languages. We find BERT accounts for only at most 5% more information than traditional, type-based word embeddings.
<<</Abstract>>>
<<<Introduction>>>
Neural networks are the backbone of modern state-of-the-art Natural Language Processing (NLP) systems. One inherent by-product of training a neural network is the production of real-valued representations. Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks' impressive performance on many NLP tasks BIBREF0. As a result of this speculation, one common thread of research focuses on the construction of probes, i.e., supervised models that are trained to extract the linguistic properties directly BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. A syntactic probe, then, is a model for extracting syntactic properties, such as part-of-speech, from the representations BIBREF6.
In this work, we question what the goal of probing for linguistic properties ought to be. Informally, probing is often described as an attempt to discern how much information representations encode about a specific linguistic property. We make this statement more formal: We assert that the goal of probing ought to be estimating the mutual information BIBREF7 between a representation-valued random variable and a linguistic property-valued random variable. This formulation gives probing a clean, information-theoretic foundation, and allows us to consider what “probing” actually means.
Our analysis also provides insight into how to choose a probe family: We show that choosing the highest-performing probe, independent of its complexity, is optimal for achieving the best estimate of mutual information (MI). This contradicts the received wisdom that one should always select simple probes over more complex ones BIBREF8, BIBREF9, BIBREF10. In this context, we also discuss the recent work of hewitt-liang-2019-designing who propose selectivity as a criterion for choosing families of probes. hewitt-liang-2019-designing define selectivity as the performance difference between a probe on the target task and a control task, writing “[t]he selectivity of a probe puts linguistic task accuracy in context with the probe's capacity to memorize from word types.” They further ponder: “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” Information-theoretically, there is no difference between learning the task and probing for linguistic structure, as we will show; thus, it follows that one should always employ the best possible probe for the task without resorting to artificial constraints.
In support of our discussion, we empirically analyze word-level part-of-speech labeling, a common syntactic probing task BIBREF6, BIBREF11, within our framework. Working on a typologically diverse set of languages (Basque, Czech, English, Finnish, Tamil, and Turkish), we show that the representations from BERT, a common contextualized embedder, only account for at most $5\%$ more of the part-of-speech tag entropy than a control. These modest improvements suggest that most of the information needed to tag part-of-speech well is encoded at the lexical level, and does not require the sentential context of the word. Put more simply, words are not very ambiguous with respect to part of speech, a result known to practitioners of NLP BIBREF12. We interpret this to mean that part-of-speech labeling is not a very informative probing task.
We also remark that formulating probing information-theoretically gives us a simple, but stunning result: contextual word embeddings, e.g., BERT BIBREF13 and ELMo BIBREF14, contain the same amount of information about the linguistic property of interest as the original sentence. This follows naturally from the data-processing inequality under a very mild assumption. What this suggests is that, in a certain sense, probing for linguistic properties in representations may not be a well grounded enterprise at all.
<<</Introduction>>>
<<<Word-Level Syntactic Probes for Contextual Embeddings>>>
Following hewitt-liang-2019-designing, we consider probes that examine syntactic knowledge in contextualized embeddings. These probes only consider a single token's embedding and try to perform the task using only that information. Specifically, in this work, we consider part-of-speech (POS) labeling: determining a word's part of speech in a given sentence. For example, we wish to determine whether the word love is a noun or a verb. This task requires the sentential context for success. As an example, consider the utterance “love is blind” where, only with the context, is it clear that love is a noun. Thus, to do well on this task, the contextualized embeddings need to encode enough about the surrounding context to correctly guess the POS.
<<<Notation>>>
Let $S$ be a random variable ranging over all possible sequences of words. For the sake of this paper, we assume the vocabulary $\mathcal {V}$ is finite and, thus, the values $S$ can take are in $\mathcal {V}^*$. We write $\mathbf {s}\in S$ as $\mathbf {s}= w_1 \cdots w_{|\mathbf {s}|}$ for a specific sentence, where each $w_i \in \mathcal {V}$ is a specific word in the sentence and the position $i \in \mathbb {N}^{+}$. We also define the random variable $W$ that ranges over the vocabulary $\mathcal {V}$. We define both a sentence-level random variable $S$ and a word-level random variable $W$ since each will be useful in different contexts during our exposition.
Next, let $T$ be a random variable whose possible values are the analyses $t$ that we want to consider for word $w_i$ in its sentential context, $\mathbf {s}= w_1 \cdots w_i \cdots w_{|\mathbf {s}|}$. In this work, we will focus on predicting the part-of-speech tag of the $i^\text{th}$ word $w_i$. We denote the set of values $T$ can take as the set $\mathcal {T}$. Finally, let $R$ be a representation-valued random variable for the $i^\text{th}$ word $w_i$ in a sentence derived from the entire sentence $\mathbf {s}$. We write $\mathbf {r}\in \mathbb {R}^d$ for a value of $R$. While any given value $\mathbf {r}$ is a continuous vector, there are only a countable number of values $R$ can take. To see this, note there are only a countable number of sentences in $\mathcal {V}^*$.
Next, we assume there exists a true distribution $p(t, \mathbf {s}, i)$ over analyses $t$ (elements of $\mathcal {T}$), sentences $\mathbf {s}$ (elements of $\mathcal {V}^*$), and positions $i$ (elements of $\mathbb {N}^{+}$). Note that the conditional distribution $p(t \mid \mathbf {s}, i)$ gives us the true distribution over analyses $t$ for the $i^{\text{th}}$ word in the sentence $\mathbf {s}$. We will augment this distribution such that $p$ is additionally a distribution over $\mathbf {r}$, i.e.,
where we define the augmentation as a Dirac's delta function
Since contextual embeddings are a deterministic function of a sentence $\mathbf {s}$, the augmented distribution in eq:true has no more randomness than the original—its entropy is the same. We assume the values of the random variables defined above are distributed according to this (unknown) $p$. While we do not have access to $p$, we assume the data in our corpus were drawn according to it. Note that $W$—the random variable over possible words—is distributed according to the marginal distribution
where we define the deterministic distribution
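Concretely, the augmented joint distribution and the word-level marginal can be written schematically as follows (one consistent choice of notation, given as an illustrative sketch rather than the exact displayed equations):

```latex
p(t, \mathbf{s}, i, \mathbf{r})
  = p(t, \mathbf{s}, i)\,\delta\!\big(\mathbf{r} - \textsc{bert}(\mathbf{s})_i\big),
\qquad
p(w) = \sum_{\mathbf{s} \in \mathcal{V}^*} \sum_{i=1}^{|\mathbf{s}|}
         p(\mathbf{s}, i)\, p(w \mid \mathbf{s}, i),
\qquad
p(w \mid \mathbf{s}, i) = \mathbb{1}\{ w = w_i \}.
```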
<<</Notation>>>
<<<Probing as Mutual Information>>>
The task of supervised probing is an attempt to ascertain how much information a specific representation $\mathbf {r}$ tells us about the value of $t$. This is naturally expressed as the mutual information, a quantity from information theory:
where we define the entropy, which is constant with respect to the representations, as
and where we define the conditional entropy as
the point-wise conditional entropy inside the sum is defined as
Again, we will not know any of the distributions required to compute these quantities; the distributions in the formulae are marginals and conditionals of the true distribution discussed in eq:true.
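For reference, the standard information-theoretic quantities referred to above are, in the notation of this section:

```latex
\mathrm{I}(T; R) = \mathrm{H}(T) - \mathrm{H}(T \mid R),
\qquad
\mathrm{H}(T) = -\sum_{t \in \mathcal{T}} p(t) \log p(t),
\qquad
\mathrm{H}(T \mid R) = \sum_{\mathbf{r}} p(\mathbf{r})\,\mathrm{H}(T \mid R = \mathbf{r}),
\qquad
\mathrm{H}(T \mid R = \mathbf{r}) = -\sum_{t \in \mathcal{T}} p(t \mid \mathbf{r}) \log p(t \mid \mathbf{r}).
```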
<<</Probing as Mutual Information>>>
<<<Bounding Mutual Information>>>
The desired conditional entropy, $\mathrm {H}(T \mid R)$ is not readily available, but with a model $q_{{\theta }}(\mathbf {t}\mid \mathbf {r})$ in hand, we can upper-bound it by measuring their empirical cross entropy
where $\mathrm {H}_{q_{{\theta }}}(T \mid R)$ is the cross-entropy we obtain by using $q_{{\theta }}$ to get this estimate. Since the KL divergence is always non-negative, we may lower-bound the desired mutual information
The more similar (in the sense of the KL divergence) $q_{{\theta }}(\cdot \mid \mathbf {r})$ is to the true distribution $p(\cdot \mid \mathbf {r})$, the tighter this bound becomes.
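Spelled out, the bound follows from the standard decomposition of the cross-entropy into the true conditional entropy plus an expected KL term (a sketch in the section's notation):

```latex
\mathrm{H}_{q_{\theta}}(T \mid R)
  = \mathrm{H}(T \mid R)
  + \mathbb{E}_{p(\mathbf{r})}\Big[\mathrm{KL}\big(p(\cdot \mid \mathbf{r}) \,\big\Vert\, q_{\theta}(\cdot \mid \mathbf{r})\big)\Big]
  \;\ge\; \mathrm{H}(T \mid R)
\quad\Longrightarrow\quad
\mathrm{I}(T; R) \;\ge\; \mathrm{H}(T) - \mathrm{H}_{q_{\theta}}(T \mid R).
```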
<<<Bigger Probes are Better.>>>
If we accept mutual information as a natural measure for how much representations encode a target linguistic task (§SECREF6), then the best estimate of that mutual information is the one where the probe $q_{{\theta }}(t \mid \mathbf {r})$ is best at the target task. In other words, we want the best probe $q_{{\theta }}(t \mid \mathbf {r})$ such that we get the tightest bound to the actual distribution $p(t\mid \mathbf {r})$. This paints the question posed by hewitt-liang-2019-designing, who write “when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?” as a false dichotomy. From an information-theoretic view, we will always prefer the probe that does better at the target task, since there is no difference between learning a task and the representations encoding the linguistic structure.
<<</Bigger Probes are Better.>>>
<<</Bounding Mutual Information>>>
<<</Word-Level Syntactic Probes for Contextual Embeddings>>>
<<<Control Functions>>>
To place the performance of a probe in perspective, hewitt-liang-2019-designing develop the notion of a control task. Inspired by this, we develop an analogue we term control functions, which are functions of the representation-valued random variable $R$. Similar to hewitt-liang-2019-designing's control tasks, the goal of a control function $\mathbf {c}(\cdot )$ is to place the mutual information $\mathrm {I}(T; R)$ in the context of a baseline that the control function encodes. Control functions have their root in the data-processing inequality BIBREF7, which states that, for any function $\mathbf {c}(\cdot )$, we have
In other words, information can only be lost by processing data. A common adage associated with this inequality is “garbage in, garbage out.”
<<<Type-Level Control Functions>>>
We will focus on type-level control functions in this paper; these functions have the effect of decontextualizing the embeddings. Such functions allow us to inquire how much the contextual aspect of the contextual embeddings help the probe perform the target task. To show that we may map from contextual embeddings to the identity of the word type, we need the following assumption about the embeddings.
Assumption 1 Every contextualized embedding is unique, i.e., for any pair of sentences $\mathbf {s}, \mathbf {s}^{\prime } \in \mathcal {V}^*$, we have $(\mathbf {s}\ne \mathbf {s}^{\prime }) \mid \mid (i \ne j) \Rightarrow \textsc {bert} (\mathbf {s})_i \ne \textsc {bert} (\mathbf {s}^{\prime })_j$ for all $i \in \lbrace 1, \ldots |\mathbf {s}|\rbrace $ and $j \in \lbrace 1, \ldots , |\mathbf {s}^{\prime }|\rbrace $.
We note that ass:one is mild. Contextualized word embeddings map words (in their context) to $\mathbb {R}^d$, which is an uncountably infinite space. However, there are only a countable number of sentences, which implies only a countable number of sequences of real vectors in $\mathbb {R}^d$ that a contextualized embedder may produce. The event that any two embeddings would be the same across two distinct sentences is infinitesimally small. ass:one yields the following corollary.
Corollary 1 There exists a function $\emph {\texttt {id} } : \mathbb {R}^d \rightarrow V$ that maps a contextualized embedding to its word type. The function $\emph {\texttt {id} }$ is not a bijection since multiple embeddings will map to the same type.
Using cor:one, we can show that any non-contextualized word embedding will contain no more information than a contextualized word embedding. More formally, we do this by constructing a look-up function $\mathbf {e}: V \rightarrow \mathbb {R}^d$ that maps a word to a word embedding. This embedding may be one-hot, randomly generated ahead of time, or the output of a data-driven embedding method, e.g. fastText BIBREF15. We can then construct a control function as the composition of the look-up function $\mathbf {e}$ and the id function $\texttt {id} $. Using the data-processing inequality, we can prove that in a word-level prediction task, any non-contextual (type-level) word embedding will contain no more information than a contextualized (token-level) one, such as those of BERT and ELMo. Specifically, we have
This result is intuitive and, perhaps, trivial—context matters information-theoretically. However, it gives us a principled foundation by which to measure the effectiveness of probes as we will show in sec:gain.
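As a minimal illustrative sketch (not the authors' implementation), a type-level control function is simply the composition of the $\texttt {id} $ map with a type-level lookup such as fastText:

```python
def make_control_function(id_fn, type_embeddings, unk_vector):
    """Compose id(.) with a type-level lookup e(.), giving c = e o id.

    id_fn:            maps a contextual vector back to its word type (a string)
    type_embeddings:  dict from word type to a fixed, non-contextual vector
    unk_vector:       fallback vector for types missing from the lookup
    """
    def c(contextual_vector):
        word = id_fn(contextual_vector)                  # decontextualize
        return type_embeddings.get(word, unk_vector)     # re-embed at the type level
    return c
```

In practice, the $\texttt {id} $ step need not invert the embedder explicitly; it suffices to keep track of which word type each contextual vector was computed from.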
<<</Type-Level Control Functions>>>
<<<How Much Information Did We Gain?>>>
We will now quantify how much a contextualized word embedding knows about a task with respect to a specific control function $\mathbf {c}(\cdot )$. We term how much more information the contextualized embeddings have about a task than a control variable the gain, which we define as
The gain function will be our method for measuring how much more information contextualized representations have over a controlled baseline, encoded as the function $\mathbf {c}$. We will empirically estimate this value in sec:experiments.
Interestingly enough, the gain has a straightforward interpretation.
Proposition 1 The gain function is equal to the following conditional mutual information
The jump from the first to the second equality follows since $R$ encodes all the information about $T$ provided by $\mathbf {c}(R)$ by construction. prop:interpretation gives us a clear understanding of the quantity we wish to estimate: It is how much information about a task is encoded in the representations, given some control knowledge. If properly designed, this control transformation will remove information from the probed representations.
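Written out (a sketch consistent with the definitions above), the gain and its conditional-mutual-information form are:

```latex
\mathcal{G}(T, R, \mathbf{e})
  = \mathrm{I}(T; R) - \mathrm{I}\big(T; \mathbf{c}(R)\big)
  = \mathrm{H}\big(T \mid \mathbf{c}(R)\big) - \mathrm{H}(T \mid R)
  = \mathrm{I}\big(T; R \mid \mathbf{c}(R)\big).
```

The second equality holds because the $\mathrm {H}(T)$ terms cancel; the third uses that $\mathbf {c}(R)$ is a deterministic function of $R$.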
<<</How Much Information Did We Gain?>>>
<<<Approximating the Gain>>>
The gain, as defined in eq:gain, is intractable to compute. In this section we derive a pair of variational bounds on $\mathcal {G}(T, R, \mathbf {e})$—one upper and one lower. To approximate the gain, we will simultaneously minimize an upper and a lower-bound on eq:gain. We begin by approximating the gain in the following manner
these cross-entropies can be empirically estimated. We will assume access to a corpus $\lbrace (t_i, \mathbf {r}_i)\rbrace _{i=1}^N$ that is human-annotated for the target linguistic property; we further assume that these are samples $(t_i, \mathbf {r}_i) \sim p(\cdot , \cdot )$ from the true distribution. This yields a second approximation that is tractable:
This approximation is exact in the limit $N \rightarrow \infty $ by the law of large numbers.
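In other words, the tractable estimate amounts to a difference of two empirical cross-entropies (a sketch of the quantity in eq:approx):

```latex
\mathcal{G}(T, R, \mathbf{e})
  \;\approx\; \mathrm{H}_{q_{\theta 2}}\big(T \mid \mathbf{c}(R)\big)
            - \mathrm{H}_{q_{\theta 1}}(T \mid R)
  \;\approx\; \frac{1}{N} \sum_{i=1}^{N}
      \Big[ \log q_{\theta 1}(t_i \mid \mathbf{r}_i)
          - \log q_{\theta 2}\big(t_i \mid \mathbf{c}(\mathbf{r}_i)\big) \Big].
```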
We note the approximation given in eq:approx may be either positive or negative and its estimation error follows from eq:entestimate
where we abuse the KL notation to simplify the equation. This is an undesired behavior since we know the gain itself is non-negative, by the data-processing inequality, but we have yet to devise a remedy.
We justify the approximation in eq:approx with a pair of variational bounds. The following two corollaries are a result of thm:variationalbounds in appendix:a.
Corollary 2 We have the following upper-bound on the gain
Corollary 3 We have the following lower-bound on the gain
The conjunction of cor:upper and cor:lower suggests a simple procedure for finding a good approximation: We choose $q_{{\theta }1}(\cdot \mid \mathbf {r})$ and $q_{{\theta }2}(\cdot \mid \mathbf {c}(\mathbf {r}))$ so as to minimize eq:upper and maximize eq:lower, respectively. These distributions contain no overlapping parameters, by construction, so these two optimization routines may be performed independently. We will optimize both with a gradient-based procedure, discussed in sec:experiments.
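A schematic sketch of the resulting estimation step (illustrative function names; the two probes are assumed to have been trained beforehand by minimizing their respective cross-entropies):

```python
import math

def estimate_gain(probe_full, probe_ctrl, control_fn, test_pairs):
    """Plug-in gain estimate: the difference of two held-out cross-entropies.

    probe_full(r)[t]             -> probability q_theta1(t | r)
    probe_ctrl(control_fn(r))[t] -> probability q_theta2(t | c(r))
    test_pairs                   -> list of (gold_tag, contextual_vector) samples
    """
    n = len(test_pairs)
    xent_full = -sum(math.log(probe_full(r)[t]) for t, r in test_pairs) / n
    xent_ctrl = -sum(math.log(probe_ctrl(control_fn(r))[t]) for t, r in test_pairs) / n
    return xent_ctrl - xent_full   # approximates H(T | c(R)) - H(T | R)
```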
<<</Approximating the Gain>>>
<<</Control Functions>>>
<<<Understanding Probing Information-Theoretically>>>
In sec:control-functions we developed an information-theoretic framework for thinking about probing contextual word embeddings for linguistic structure. However, we now cast doubt on whether probing makes sense as a scientific endeavour. We prove in sec:context that contextualized word embeddings, by construction, contain no more information about a word-level syntactic task than the original sentence itself. Nevertheless, we do find a meaningful scientific interpretation of control functions. We expound upon this in sec:control-functions-meaning, arguing that control functions are useful, not for understanding representations, but rather for understanding the influence of sentential context on word-level syntactic tasks, e.g., labeling words with their part of speech.
<<<You Know Nothing, BERT>>>
To start, we note the following corollary
Corollary 4 It directly follows from ass:one that $\textsc {bert} $ is a bijection between sentences $\mathbf {s}$ and sequences of embeddings $\langle \mathbf {r}_1, \ldots , \mathbf {r}_{|\mathbf {s}|} \rangle $. As $\textsc {bert} $ is a bijection, it has an inverse, which we will denote as $\textsc {bert}^{-1} $.
Theorem 1 The function $\textsc {bert} (S)$ cannot provide more information about $T$ than the sentence $S$ itself.
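A sketch of the argument, applying the data-processing inequality in both directions and using the inverse $\textsc {bert}^{-1} $ guaranteed by the corollary above:

```latex
\mathrm{I}\big(T; \textsc{bert}(S)\big) \;\le\; \mathrm{I}(T; S),
\qquad
\mathrm{I}(T; S) \;=\; \mathrm{I}\big(T; \textsc{bert}^{-1}(\textsc{bert}(S))\big)
               \;\le\; \mathrm{I}\big(T; \textsc{bert}(S)\big).
```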
This implies $\mathrm {I}(T ; S) = \mathrm {I}(T; \textsc {bert} (S))$. We remark this is not a BERT-specific result—it rests on the fact that the data-processing inequality is tight for bijections. While thm:bert is a straightforward application of the data-processing inequality, it has deeper ramifications for probing. It means that if we search for syntax in the contextualized word embeddings of a sentence, we should not expect to find any more syntax than is present in the original sentence. In a sense, thm:bert is a cynical statement: the endeavour of finding syntax in the contextualized embeddings of sentences is nonsensical. This is because, under ass:one, we know the answer a priori—the contextualized word embeddings of a sentence contain exactly the same amount of information about syntax as does the sentence itself.
<<</You Know Nothing, BERT>>>
<<<What Do Control Functions Mean?>>>
Information-theoretically, the interpretation of control functions is also interesting. As previously noted, our interpretation of control functions in this work does not provide information about the representations themselves. Actually, the same reasoning used in cor:one could be used to devise a function $\texttt {id} _s(\mathbf {r})$ which leads from a single representation back to the whole sentence. For a type-level control function $\mathbf {c}$, by the data-processing inequality, we have that $\mathrm {I}(T; W) \ge \mathrm {I}(T; \mathbf {c}(R))$. Consequently, we obtain an upper bound on how much information we can extract from a decontextualized representation. If we assume we have perfect probes, then we get that the true gain function is $\mathrm {I}(T; S) - \mathrm {I}(T; W) = \mathrm {I}(T; S \mid W)$. This quantity is interpreted as the amount of knowledge we gain about the word-level task $T$ by knowing $S$ (i.e., the sentence) in addition to $W$ (i.e., the word). Therefore, a perfect probe would provide insights about language and not about the actual representations, which are no more than a means to an end.
<<</What Do Control Functions Mean?>>>
<<<Discussion: Ease of Extraction>>>
We do acknowledge another interpretation of the work of hewitt-liang-2019-designing inter alia; BERT makes the syntactic information present in an ordered sequence of words more easily extractable. However, ease of extraction is not a trivial notion to formalize, and indeed, we know of no attempt to do so; it is certainly more complex to determine than the number of layers in a multi-layer perceptron (MLP). After all, an MLP with a single hidden layer can represent any function over the unit cube, with the caveat that we may need a very large number of hidden units BIBREF16.
Although for perfect probes the above results should hold, in practice $\texttt {id} (\cdot )$ and $\mathbf {c}(\cdot )$ may be hard to approximate. Furthermore, if these functions were to be learned, they might require an unreasonably large dataset. A random embedding control function, for example, would require an infinitely large dataset to be learned—or at least one that contained all words in the vocabulary $V$. “Better” representations should make their respective probes more easily learnable—and consequently their encoded information more accessible.
We suggest that future work on probing should focus on operationalizing ease of extraction more rigorously—even though we do not attempt this ourselves. The advantage of simple probes is that they may reveal something about the structure of the encoded information—i.e., is it structured in such a way that it can be easily taken advantage of by downstream consumers of the contextualized embeddings? We suspect that many researchers who are interested in less complex probes have implicitly had this in mind.
<<</Discussion: Ease of Extraction>>>
<<</Understanding Probing Information-Theoretically>>>
<<<A Critique of Control Tasks>>>
While this paper builds on the work of hewitt-liang-2019-designing, and we agree with them that we should have control tasks when probing for linguistic properties, we disagree with parts of the methodology for the control task construction. We present these disagreements here.
<<<Structure and Randomness>>>
hewitt-liang-2019-designing introduce control tasks to evaluate the effectiveness of probes. We draw inspiration from this technique as evidenced by our introduction of control functions. However, we take issue with the suggestion that controls should have structure and randomness, to use the terminology from hewitt-liang-2019-designing. They define structure as “the output for a word token is a deterministic function of the word type.” This means that they are stripping the language of ambiguity with respect to the target task. In the case of part-of-speech labeling, love would either be a noun or a verb in a control task, never both: this is a problem. The second feature of control tasks is randomness, i.e., “the output for each word type is sampled independently at random.” In conjunction, structure and randomness may yield a relatively trivial task that does not look at all like natural language.
What is more, there is a closed-form solution for an optimal, retrieval-based “probe” that has zero parameters: if a word type appears in the training set, return the label with which it was annotated there; otherwise, return the most frequently occurring label across all words in the training set. This probe will achieve an accuracy of approximately 1 minus the product of the out-of-vocabulary rate (the number of tokens in the test set that correspond to novel types, divided by the total number of tokens) and the proportion of test-set tags that are not the most frequent tag (i.e., the error rate of a guess-the-most-frequent-tag classifier). In short, the best model for a control task is a pure memorizer that guesses the most frequent tag for out-of-vocabulary words.
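A sketch of such a zero-parameter memorizer (illustrative code):

```python
from collections import Counter

def fit_retrieval_probe(train_words, train_labels):
    """Memorize the control-task label of every training type; back off to the
    globally most frequent label for out-of-vocabulary words."""
    label_of = {}
    for w, y in zip(train_words, train_labels):
        label_of.setdefault(w, y)   # control tasks assign each type a single label
    fallback = Counter(train_labels).most_common(1)[0][0]
    return lambda w: label_of.get(w, fallback)
```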
<<</Structure and Randomness>>>
<<<What's Wrong with Memorization?>>>
hewitt-liang-2019-designing propose that probes should be optimized to maximize accuracy and selectivity. Recall that selectivity is given by the distance between the accuracy on the original task and the accuracy on the control task using the same architecture. Given their characterization of control tasks, maximizing selectivity leads to the selection of a model that is bad at memorization. But why should we punish memorization? Much of linguistic competence is about generalization; however, memorization also plays a key role BIBREF17, BIBREF18, BIBREF19, with word learning BIBREF20 being an obvious example. Indeed, maximizing selectivity as a criterion for creating probes seems to artificially disfavor this property.
<<</What's Wrong with Memorization?>>>
<<<What Low-Selectivity Means>>>
hewitt-liang-2019-designing acknowledge that for the more complex task of dependency edge prediction, an MLP probe is more accurate and, therefore, preferable despite its low selectivity. However, they offer two counter-examples where the less selective neural probe exhibits drawbacks when compared to its more selective, linear counterpart. We believe both examples are a symptom of using a simple probe rather than of selectivity being a useful metric for probe selection. First, hewitt-liang-2019-designing [§3.6] point out that, in their experiments, the MLP-1 model frequently mislabels words with the suffix -s as NNPS on the POS labeling task. They present this finding as a possible example of a less selective probe being less faithful in representing what linguistic information the model has learned. Our analysis leads us to believe that, on the contrary, this shows that one should be using the best possible probe to minimize the chance of misrepresentation. Since more complex probes achieve higher accuracy on the task, as evidenced by the findings of hewitt-liang-2019-designing, we believe that the overall trend of misrepresentation is higher for the probes with higher selectivity. The same applies to the second example, discussed in [§4.2] of hewitt-liang-2019-designing, where a less selective probe appears to be less faithful. The authors show that the representations on ELMo's second layer fail to outperform its word-type ones (layer zero) on the POS labeling task when using the MLP-1 probe. While they argue this is evidence for selectivity being a useful metric in choosing appropriate probes, we argue that this demonstrates yet again that one needs to use a more complex probe to minimize the chances of misrepresenting what the model has learned. The fact that the linear probe shows a difference only demonstrates that the information is perhaps more accessible with ELMo, not that it is not present; see sec:ease-extract.
<<</What Low-Selectivity Means>>>
<<</A Critique of Control Tasks>>>
<<<Experiments>>>
We consider the task of POS labeling and use the universal POS tag information BIBREF21 from the Universal Dependencies 2.4 BIBREF22. We probe the multilingual release of BERT on six typologically diverse languages: Basque, Czech, English, Finnish, Tamil, and Turkish; and we compute the contextual representations of each sentence by feeding it into BERT and averaging the output word piece representations for each word, as tokenized in the treebank.
<<<Probe Architecture>>>
As expounded upon above, our purpose is to achieve the best bound on mutual information we can. To this end, we employ a deep MLP as our probe. We define the probe as
an $m$-layer neural network with the non-linearity $\sigma (\cdot ) = \mathrm {ReLU}(\cdot )$. The initial projection matrix is $W^{(1)} \in \mathbb {R}^{r_1 \times d}$ and the final projection matrix is $W^{(m)} \in \mathbb {R}^{|\mathcal {T}| \times r_{m-1}}$, where $r_i=\frac{r}{2^{i-1}}$. The remaining matrices are $W^{(i)} \in \mathbb {R}^{r_i \times r_{i-1}}$, so we halve the number of hidden states in each layer. We optimize over the hyperparameters—number of layers, hidden size, one-hot embedding size, and dropout—by using random search. For each estimate, we train 50 models and choose the one with the best validation cross-entropy. The cross-entropy in the test set is then used as our entropy estimate.
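A sketch of this probe in PyTorch (the placement of dropout and the default values are assumptions for illustration; the actual settings were chosen by the random search described above):

```python
import torch.nn as nn

def build_probe(d, r, m, num_tags, dropout=0.2):
    """m-layer MLP probe with widths d -> r -> r/2 -> ... -> r/2^(m-2) -> |T|,
    ReLU non-linearities, and dropout between the hidden layers."""
    dims = [d] + [r // (2 ** i) for i in range(m - 1)] + [num_tags]
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:                # no activation after the final projection
            layers += [nn.ReLU(), nn.Dropout(dropout)]
    return nn.Sequential(*layers)
```

Training this network with a standard cross-entropy loss directly yields the cross-entropy estimates used in the bounds above.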
<<</Probe Architecture>>>
<<<Results>>>
We know $\textsc {bert} $ can generate text in many languages; here we assess how much it actually knows about syntax in those languages, and how much more it knows than simple type-level baselines. tab:results-full presents these results, showing how much information $\textsc {bert} $, fastText and one-hot embeddings encode about POS tagging. We see that—in all analysed languages—type-level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages.
$\textsc {bert} $ presents negative gains in some of the analysed languages. Although this may seem to contradict the data-processing inequality, it is actually caused by the difficulty of approximating $\texttt {id} $ and $\mathbf {c}(\cdot )$ with a finite training set—causing $\mathrm {KL}_{q_{{\theta }1}}(T \mid R)$ to be larger than $\mathrm {KL}_{q_{{\theta }2}}(T \mid \mathbf {c}(R))$. We believe this highlights the need to formalize ease of extraction, as discussed in sec:ease-extract.
Finally, when put into perspective, multilingual $\textsc {bert} $'s representations do not seem to encode much more information about syntax than a trivial baseline. $\textsc {bert} $ only improves upon fastText in three of the six analysed languages—and even in those, it encodes at most (in English) $5\%$ additional information.
<<</Results>>>
<<</Experiments>>>
<<<Conclusion>>>
We proposed an information-theoretic formulation of probing: we defined probing as the task of estimating conditional mutual information. We introduced control functions, which allow us to put the amount of information encoded in contextual representations in the context of knowledge judged to be trivial. We further explored this formalization and showed that, given perfect probes, probing can only yield insights into the language itself and tells us nothing about the representations under investigation. Keeping this in mind, we suggested a change of focus—instead of focusing on probe size or information, we should look at ease of extraction going forward.
On another note, we apply our formalization to evaluate multilingual $\textsc {bert} $'s syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than $81\%$ in all languages), it only encodes at most $5\%$ more information than some trivial baseline knowledge (a type-level representation). This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Abstract, A Critique of Control Tasks"
],
"type": "disordered_section"
}
|
1908.08566
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Unsupervised Text Summarization via Mixed Model Back-Translation
<<<Abstract>>>
Back-translation-based approaches have recently led to significant progress in unsupervised sequence-to-sequence tasks such as machine translation or style transfer. In this work, we extend the paradigm to the problem of learning a sentence summarization system from unaligned data. We present several initial models which rely on the asymmetrical nature of the task to perform the first back-translation step, and demonstrate the value of combining the data created by these diverse initialization methods. Our system outperforms the current state-of-the-art for unsupervised sentence summarization from fully unaligned data by over 2 ROUGE, and matches the performance of recent semi-supervised approaches.
<<</Abstract>>>
<<<Introduction>>>
Machine summarization systems have made significant progress in recent years, especially in the domain of news text. This has been made possible, among other things, by the popularization of the neural sequence-to-sequence (seq2seq) paradigm BIBREF0, BIBREF1, BIBREF2, the development of methods which combine the strengths of extractive and abstractive approaches to summarization BIBREF3, BIBREF4, and the availability of large training datasets for the task, such as Gigaword or the CNN-Daily Mail corpus, which comprise over 3.8M shorter and 300K longer articles with aligned summaries, respectively. Unfortunately, the lack of datasets of similar scale for other text genres remains a limiting factor when attempting to take full advantage of these modeling advances using supervised training algorithms.
In this work, we investigate the application of back-translation to training a summarization system in an unsupervised fashion from unaligned full text and summaries corpora. Back-translation has been successfully applied to unsupervised training for other sequence to sequence tasks such as machine translation BIBREF5 or style transfer BIBREF6. We outline the main differences between these settings and text summarization, devise initialization strategies which take advantage of the asymmetrical nature of the task, and demonstrate the advantage of combining varied initializers. Our approach outperforms the previous state-of-the-art on unsupervised text summarization while using less training data, and even matches the rouge scores of recent semi-supervised methods.
<<</Introduction>>>
<<<Related Work>>>
BIBREF7's work on applying neural seq2seq systems to the task of text summarization has been followed by a number of works improving upon the initial model architecture. These have included changing the base encoder structure BIBREF8, adding a pointer mechanism to directly re-use input words in the summary BIBREF9, BIBREF3, or explicitly pre-selecting parts of the full text to focus on BIBREF4. While there have been comparatively few attempts to train these models with less supervision, auto-encoding based approaches have met some success BIBREF10, BIBREF11.
BIBREF10's work endeavors to use summaries as a discrete latent variable for a text auto-encoder. They train a system on a combination of the classical log-likelihood loss of the supervised setting and a reconstruction objective which requires the full text to be mostly recoverable from the produced summary. While their method is able to take advantage of unlabelled data, it relies on a good initialization of the encoder part of the system which still needs to be learned on a significant number of aligned pairs. BIBREF11 expand upon this approach by replacing the need for supervised data with adversarial objectives which encourage the summaries to be structured like natural language, allowing them to train a system in a fully unsupervised setting from unaligned corpora of full text and summary sequences. Finally, BIBREF12 uses a general purpose pre-trained text encoder to learn a summarization system from fewer examples. Their proposed MASS scheme is shown to be more efficient than BERT BIBREF13 or Denoising Auto-Encoders (DAE) BIBREF14, BIBREF15.
This work proposes a different approach to unsupervised training based on back-translation. The idea of using an initial weak system to create and iteratively refine artificial training data for a supervised algorithm has been successfully applied to semi-supervised BIBREF16 and unsupervised machine translation BIBREF5 as well as style transfer BIBREF6. We investigate how the same general paradigm may be applied to the task of summarizing text.
<<</Related Work>>>
<<<Mixed Model Back-Translation>>>
Let us consider the task of transforming a sequence in domain $A$ into a corresponding sequence in domain $B$ (e.g. sentences in two languages for machine translation). Let $\mathcal {D}_A$ and $\mathcal {D}_B$ be corpora of sequences in $A$ and $B$, without any mapping between their respective elements. The back-translation approach starts with initial seq2seq models $f^0_{A \rightarrow B}$ and $f^0_{B \rightarrow A}$, which can be hand-crafted or learned without aligned pairs, and uses them to create artificial aligned training data:
Let $\mathcal {S}$ denote a supervised learning algorithm, which takes a set of aligned sequence pairs and returns a mapping function. This artificial data can then be used to train the next iteration of seq2seq models, which in turn are used to create new artificial training sets ($A$ and $B$ can be switched here):
The model is trained at each iteration on artificial inputs and real outputs, then used to create new training inputs. Thus, if the initial system isn't too far off, we can hope that training pairs get closer to the true data distribution with each step, in turn allowing us to train better models.
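A minimal sketch of this loop (illustrative code; train stands for the supervised algorithm $\mathcal {S}$, and the two initial models may be hand-crafted or learned without aligned pairs as described above):

```python
def back_translation(D_A, D_B, f_AB_0, f_BA_0, train, n_rounds=3):
    """Each direction is retrained on (artificial input, real output) pairs
    produced by the current model of the opposite direction."""
    f_AB, f_BA = f_AB_0, f_BA_0
    for _ in range(n_rounds):
        # pair real B-side data with artificial A-side inputs, retrain A -> B
        f_AB = train([(f_BA(b), b) for b in D_B])
        # pair real A-side data with artificial B-side inputs, retrain B -> A
        f_BA = train([(f_AB(a), a) for a in D_A])
    return f_AB, f_BA
```

For the summarization setting described next, only the full-text-to-summary direction is initialized, so the first half-step is performed with that model and the two directions then alternate.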
In the case of summarization, we consider the domains of full text sequences $\mathcal {D}^F$ and of summaries $\mathcal {D}^S$, and attempt to learn summarization ($f_{F\rightarrow S}$) and expansion ($f_{S\rightarrow F}$) functions. However, contrary to the translation case, $\mathcal {D}^F$ and $\mathcal {D}^S$ are not interchangeable. Considering that a summary typically has less information than the corresponding full text, we choose to only define initial ${F\rightarrow S}$ models. We can still follow the proposed procedure by alternating directions at each step.
<<<Initialization Models for Summarization>>>
To initiate their process for the case of machine translation, BIBREF5 use two different initialization models for their neural (NMT) and phrase-based (PBSMT) systems. The former relies on denoising auto-encoders in both languages with a shared latent space, while the latter uses the PBSMT system of BIBREF17 with a phrase table obtained through unsupervised vocabulary alignment as in BIBREF18.
While both of these methods work well for machine translation, they rely on the input and output having similar lengths and information content. In particular, the statistical machine translation algorithm tries to align most input tokens to an output word. In the case of text summarization, however, there is an inherent asymmetry between the full text and the summaries, since the latter express only a subset of the former. Next, we propose three initialization systems which implicitly model this information loss. Full implementation details are provided in the Appendix.
<<<Procrustes Thresholded Alignment (Pr-Thr)>>>
The first initialization is similar to the one for PBSMT in that it relies on unsupervised vocabulary alignment. Specifically, we train two skipgram word embedding models using fasttext BIBREF19 on $\mathcal {D}^F$ and $\mathcal {D}^S$, then align them in a common space using the Wasserstein Procrustes method of BIBREF18. Then, we map each word of a full text sequence to its nearest neighbor in the aligned space if their distance is smaller than some threshold, or skip it otherwise. We also limit the output length, keeping only the first $N$ tokens. We refer to this function as $f_{F\rightarrow S}^{(\text{Pr-Thr}), 0}$.
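A sketch of the mapping step, assuming the two embedding spaces have already been aligned (illustrative code; variable names are not from the paper):

```python
import numpy as np

def pr_thr_summarize(full_tokens, aligned_full_emb, summary_emb_matrix,
                     summary_vocab, threshold, max_len):
    """Map each full-text word to its nearest neighbour in the aligned summary
    space if the distance is below the threshold, skip it otherwise, and keep
    at most max_len output tokens."""
    out = []
    for w in full_tokens:
        if w not in aligned_full_emb:
            continue                                     # skip unknown words
        dists = np.linalg.norm(summary_emb_matrix - aligned_full_emb[w], axis=1)
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            out.append(summary_vocab[j])
        if len(out) >= max_len:
            break
    return out
```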
<<</Procrustes Thresholded Alignment (Pr-Thr)>>>
<<<Denoising Bag-of-Word Auto-Encoder (DBAE)>>>
Similarly to both BIBREF5 and BIBREF11, we also devise a starting model based on a DAE. One major difference is that we use a simple Bag-of-Words (BoW) encoder with fixed pre-trained word embeddings, and a 2-layer GRU decoder. Indeed, we find that a BoW auto-encoder trained on the summaries reaches a reconstruction rouge-l f-score of nearly 70% on the test set, indicating that word presence information is mostly sufficient to model the summaries. As for the noise model, for each token in the input, we remove it with probability $p/2$ and add a word drawn uniformly from the summary vocabulary with probability $p$.
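A sketch of this noise function, assuming the deletion and insertion decisions are made independently for each input token (illustrative code):

```python
import random

def add_noise(tokens, summary_vocab, p):
    """Drop each token with probability p/2 and, with probability p, insert a
    word drawn uniformly from the summary vocabulary."""
    noisy = []
    for tok in tokens:
        if random.random() >= p / 2:
            noisy.append(tok)                            # kept with probability 1 - p/2
        if random.random() < p:
            noisy.append(random.choice(summary_vocab))   # uniform summary-vocabulary word
    return noisy
```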
The BoW encoder has two advantages. First, it lacks the other models' bias to keep the word order of the full text in the summary. Secondly, when using the DBAE to predict summaries from the full text, we can weight the input word embeddings by their corpus-level probability of appearing in a summary, forcing the model to pay less attention to words that only appear in $\mathcal {D}^F$. The Denoising Bag-of-Words Auto-Encoder with input re-weighting is referred to as $f_{F\rightarrow S}^{(\text{DBAE}), 0}$.
<<</Denoising Bag-of-Word Auto-Encoder (DBAE)>>>
<<<First-Order Word Moments Matching (@!START@$\mathbf {\mu }$@!END@:1)>>>
We also propose an extractive initialization model. Given the same BoW representation as for the DBAE, function $f_\theta ^\mu (s, v)$ predicts the probability that each word $v$ in a full text sequence $s$ is present in the summary. We learn the parameters of $f_\theta ^\mu $ by marginalizing the output probability of each word over all full text sequences, and matching these first-order moments to the marginal probability of each word's presence in a summary. That is, let $\mathcal {V}^S$ denote the vocabulary of $\mathcal {D}^S$, then $\forall v \in \mathcal {V}^S$:
We minimize the binary cross-entropy (BCE) between the output and summary moments:
We then define an initial extractive summarization model by applying $f_{\theta ^*}^\mu (\cdot , \cdot )$ to all words of an input sentence, and keeping the ones whose output probability is greater than some threshold. We refer to this model as $f_{F\rightarrow S}^{(\mathbf {\mu }:1), 0}$.
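A sketch of the moment-matching objective (the tensor shapes and the exact form of the marginalization are illustrative assumptions, not the paper's equation):

```python
import torch
import torch.nn.functional as F

def moment_matching_loss(pred_probs, occurrence_mask, summary_moments):
    """pred_probs:      (batch, |V_S|) predicted probability that each word is kept
    occurrence_mask: (batch, |V_S|) 1.0 where the word occurs in the full-text sequence
    summary_moments: (|V_S|,) empirical probability of the word appearing in a summary
    """
    # marginalize the model's per-word probabilities over the batch of full-text sequences
    model_moments = (pred_probs * occurrence_mask).sum(0) / occurrence_mask.sum(0).clamp(min=1.0)
    return F.binary_cross_entropy(model_moments, summary_moments)
```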
<<</First-Order Word Moments Matching (@!START@$\mathbf {\mu }$@!END@:1)>>>
<<</Initialization Models for Summarization>>>
<<<Artificial Training Data>>>
We apply the back-translation procedure outlined above in parallel for all three initialization models. For example, $f_{F\rightarrow S}^{(\mathbf {\mu }:1), 0}$ yields the following sequence of models and artificial aligned datasets:
Finally, in order to take advantage of the various strengths of each of the initialization models, we also concatenate the artificial training datasets at each odd iteration to train a summarizer, e.g.:
<<</Artificial Training Data>>>
<<</Mixed Model Back-Translation>>>
<<<Experiments>>>
<<<Data and Model Choices>>>
We validate our approach on the Gigaword corpus, which comprises a training set of 3.8M article headlines (considered to be the full text) and titles (summaries), along with 200K validation pairs, and we report test performance on the same 2K set used in BIBREF7. Since we want to learn systems from fully unaligned data without giving the model an opportunity to learn an implicit mapping, we also further split the training set into 2M examples for which we only use titles, and 1.8M for headlines. All models after the initialization step are implemented as convolutional seq2seq architectures using Fairseq BIBREF20. Artificial data generation uses top-15 sampling, with a minimum length of 16 for full text and a maximum length of 12 for summaries. rouge scores are obtained with an output vocabulary of size 15K and a beam search of size 5 to match BIBREF11.
<<</Data and Model Choices>>>
<<<Initializers>>>
Table TABREF9 compares test ROUGE for different initialization models, as well as the trivial Lead-8 baseline, which simply copies the first 8 words of the article. We find that simply thresholding on distance during the word alignment step of (Pr-Thr) does slightly better than the full PBSMT system used by BIBREF5. Our BoW denoising auto-encoder with word re-weighting also performs significantly better than the full seq2seq DAE initialization used by BIBREF11 (Pre-DAE). The moments-based initial model ($\mathbf {\mu }$:1) scores higher than either of these, with scores already close to the full unsupervised system of BIBREF11.
In order to investigate the effect of these three different strategies beyond their rouge statistics, we show generations of the three corresponding first iteration expanders for a given summary in Table TABREF1. The unsupervised vocabulary alignment in (Pr-Thr) handles vocabulary shift, especially changes in verb tenses (summaries tend to be in the present tense), but maintains the word order and adds very little information. Conversely, the ($\mathbf {\mu }$:1) expansion function, which is learned from purely extractive summaries, re-uses most words in the summary without any change and adds some new information. Finally, the auto-encoder based (DBAE) significantly increases the sequence length and variety, but also strays from the original meaning (more examples in the Appendix). The decoders also seem to learn facts about the world during their training on article text (EDF/GDF is France's public power company).
<<</Initializers>>>
<<<Full Models>>>
Finally, Table TABREF13 compares the summarizers learned at various back-translation iterations to other unsupervised and semi-supervised approaches. Overall, our system outperforms the unsupervised Adversarial-reinforce of BIBREF11 after one back-translation loop, and most semi-supervised systems after the second one, including BIBREF12's MASS pre-trained sentence encoder and BIBREF10's Forced-attention Sentence Compression (FSC), which use 100K and 500K aligned pairs respectively. As far as back-translation approaches are concerned, we note that the model performances are correlated with the initializers' scores reported in Table TABREF9 (iterations 4 and 6 follow the same pattern). In addition, we find that combining data from all three initializers before training a summarizer system at each iteration as described in Section SECREF8 performs best, suggesting that the greater variety of artificial full text does help the model learn.
<<</Full Models>>>
<<<Conclusion>>>
In this work, we use the back-translation paradigm for unsupervised training of a summarization system. We find that the model benefits from combining initializers, matching the performance of semi-supervised approaches.
<<</Conclusion>>>
<<</Experiments>>>
<<</Title>>>
|
{
"references": [
"Introduction, Experiments"
],
"type": "disordered_section"
}
|
1912.00955
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Dynamic Prosody Generation for Speech Synthesis using Linguistics-Driven Acoustic Embedding Selection
<<<Abstract>>>
Recent advances in Text-to-Speech (TTS) have improved quality and naturalness to near-human capabilities when considering isolated sentences. But something which is still lacking in order to achieve human-like communication is the dynamic variations and adaptability of human speech. This work attempts to solve the problem of achieving a more dynamic and natural intonation in TTS systems, particularly for stylistic speech such as the newscaster speaking style. We propose a novel embedding selection approach which exploits linguistic information, leveraging the speech variability present in the training dataset. We analyze the contribution of both semantic and syntactic features. Our results show that the approach improves the prosody and naturalness for complex utterances as well as in Long Form Reading (LFR).
<<</Abstract>>>
<<<Introduction>>>
Recent advances in TTS have improved the achievable synthetic speech naturalness to near human-like capabilities BIBREF0, BIBREF1, BIBREF2, BIBREF3. This means that for simple sentences, or for situations in which we can correctly predict the most appropriate prosodic representation, TTS systems are providing us with speech practically indistinguishable from that of humans.
One aspect that most systems still lack is the natural variability of human speech, which has been observed to be one of the reasons why the cognitive load of synthetic speech is higher than that of human speech BIBREF4. This is something that variational models, such as those based on Variational Auto-Encoding (VAE) BIBREF3, BIBREF5, attempt to solve by exploiting the sampling capabilities of the acoustic embedding space at inference time.
Despite the advantages that VAE-based inference brings, it also suffers from the limitation that to synthesize a sample, one has to select an appropriate acoustic embedding for it, which can be challenging. A possible solution to this is to remove the selection process and consistently use a centroid to represent speech. This provides reliable acoustic representations but it suffers again from the monotonicity problem of conventional TTS. Another approach is to simply do a random sampling of the acoustic space. This would certainly solve the monotonicity problem if the acoustic embedding were varied enough. It can however, introduce erratic prosodic representations of longer texts, which can prove to be worse than being monotonous. Finally, one can consider text-based selection or prediction, as done in this research.
In this work, we present a novel approach for informed embedding selection using linguistic features. The tight relationship between syntactic constituent structure and prosody is well known BIBREF6, BIBREF7. In the traditional Natural Language Processing (NLP) pipeline, constituency parsing produces full syntactic trees. More recent approaches based on Contextual Word Embedding (CWE) suggest that CWE are largely able to implicitly represent the classic NLP pipeline BIBREF8, while still retaining the ability to model lexical semantics BIBREF9. Thus, in this work we explore how TTS systems can enhance the quality of speech synthesis by using such linguistic features to guide the prosodic contour of generated speech.
Similar relevant recent work exploring the advantages of exploiting syntactic information for TTS can be seen in BIBREF10, BIBREF11. While those studies, without any explicit acoustic pairing to the linguistic information, inject a number of curated features concatenated to the phonetic sequence as a way of informing the TTS system, the present study makes use of the linguistic information to drive the acoustic embedding selection rather than using it as an additional model features.
An exploration of how to use linguistics as a way of predicting adequate acoustic embeddings can be seen in BIBREF12, where the authors explore the path of predicting an adequate embedding by informing the system with a set of linguistic and semantic information. The main difference of the present work is that in our case, rather than predicting a point in a high-dimensional space by making use of sparse input information (which is a challenging task and potentially vulnerable to training-domain dependencies), we use the linguistic information to predict the most similar embedding in our training set, reducing the complexity of the task significantly.
The main contributions of this work are: i) we propose a novel approach of embedding selection in the acoustic space by using linguistic features; ii) we demonstrate that including syntactic information-driven acoustic embedding selection improves the overall speech quality, including its prosody; iii) we compare the improvements achieved by exploiting syntactic information in contrast with those brought by CWE; iv) we demonstrate that the approach improves the TTS quality in LFR experience as well.
<<</Introduction>>>
<<<Proposed Systems>>>
CWE seem the obvious choice to drive embedding selection as they contain both syntactic and semantic information. However, a possible drawback of relying on CWE is that the linguistic-acoustic mapping space is sparse. The generalization capability of such systems in unseen scenarios will be poor BIBREF13. Also, as CWE models lexical semantics, it implies that two semantically similar sentences are likely to have similar CWE representations. This however does not necessarily correspond to a similarity in prosody, as the structure of the two sentences can be very different.
We hypothesize that, in some scenarios, syntax will have better capability to generalize than semantics and that CWE have not been optimally exploited for driving prosody in speech synthesis. We explore these two hypotheses in our experiments. The objective of this work is to exploit sentence-level prosody variations available in the training dataset while synthesizing speech for the test sentence. The steps executed in this proposed approach are: (i) generate suitable vector representations containing linguistic information for all the sentences in the train and test sets; (ii) measure the similarity of the test sentence with each of the sentences in the train set, using cosine similarity between the vector representations as done in BIBREF14 to evaluate linguistic similarity; (iii) choose the acoustic embedding of the train sentence which gives the highest similarity with the test sentence; (iv) synthesize speech from VAE-based inference using this acoustic embedding.
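To make steps (ii)-(iv) concrete, the following is a minimal sketch of the selection rule, assuming the linguistic vectors and the per-sentence VAE acoustic embeddings of the training set are already available; all names are illustrative, not the authors' code:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_acoustic_embedding(test_vec, train_vecs, train_acoustic):
    # steps (ii)-(iii): the most linguistically similar train sentence wins,
    # and its acoustic embedding is reused for VAE-based inference (step iv)
    sims = [cosine(test_vec, v) for v in train_vecs]
    best = int(np.argmax(sims))
    return train_acoustic[best]
```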
<<<Systems>>>
We experiment with three different systems for generating vector representations of the sentences, which allow us to explore the impact of both syntax and semantics on the overall quality of speech synthesis. The representations from the first system use syntactic information only, the second relies solely on CWE while the third uses a combination of CWE and explicit syntactic information.
<<<Syntactic>>>
Syntactic representations of sentences, such as constituency parse trees, need to be transformed into vectors in order to be usable in neural TTS models. Some dimensions describing the tree can be transformed into word-based categorical features such as the identity of the parent and the position of the word in a phrase BIBREF15.
The syntactic distance between adjacent words is known to be a prosodically relevant numerical source of information which is easily extracted from the constituency tree BIBREF16. It is explained by the fact that if many nodes must be traversed to find the first common ancestor, the syntactic distance between words is high. Large syntactic distances correlate with acoustically relevant events such as phrasing breaks or prosodic resets.
To compute syntactic distance vector representations for sentences, we use the algorithm mentioned in BIBREF17. That is, for a sentence of n tokens, there are n corresponding distances which are concatenated together to give a vector of length n. The distance between the start of sentence and first token is always 0.
We can see an example in Fig. 1: for the sentence “The brown fox is quick and it is jumping over the lazy dog", whose distance vector is d = [0 2 1 3 1 8 7 6 5 4 3 2 1]. The completion of the subject noun phrase (after `fox') triggers a prosodic reset, reflected in the distance of 3 between `fox' and `is'. There should also be a more emphasized reset at the end of the first clause, represented by the distance of 8 between `quick' and `and'.
<<</Syntactic>>>
<<<BERT>>>
To generate CWE we use BERT BIBREF18, as it is one of the best-performing pre-trained models, with state-of-the-art results on a large number of NLP tasks. BERT has also been shown to generate strong representations for both syntax and semantics. We use the word representations from the uncased base (12 layer) model without fine-tuning. The sentence-level representations are obtained by averaging the second-to-last hidden layer for each token in the sentence. These embeddings are used to drive acoustic embedding selection.
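As a rough illustration of this averaging scheme (not the authors' code), the sketch below uses the HuggingFace implementation of the uncased base model and assumes a recent transformers release; indexing the second-to-last of the returned hidden states follows the description above:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def sentence_embedding(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    hidden = out.hidden_states[-2]        # second-to-last layer, (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)  # average over tokens -> (768,)
```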
<<</BERT>>>
<<<BERT Syntactic>>>
Even though BERT embeddings capture some aspects of syntactic information along with semantics, we decided to experiment with a system combining the information captured by both of the above mentioned systems. The information from syntactic distances and BERT embeddings cannot be combined at token level to give a single vector representation since both these systems use different tokenization algorithms. Tokenization in BERT is based on the wordpiece algorithm BIBREF19 as a way to eliminate the out-of-vocabulary issues. On the other hand, tokenization used to generate parse trees is based on morphological considerations rooted in linguistic theory. At inference time, we average the similarity scores obtained by comparing the BERT embeddings and the syntactic distance vectors.
<<</BERT Syntactic>>>
<<</Systems>>>
<<<Applications to LFR>>>
The approaches described in Section SECREF1 produce utterances with more varied prosody as compared to the long-term monotonicity of those obtained via centroid-based VAE inference. However, when considering multi-sentence texts, we have to be mindful of the issues that can be introduced by erratic transitions. We tackle this issue by minimizing the acoustic variation a sentence can have with respect to the previous one, while still minimizing the linguistic distance. We consider the Euclidean distance between the 2D Principal Component Analysis (PCA) projected acoustic embeddings as a measure of acoustic variation, as we observe that the projected space provides us with an acoustically relevant space in which distances can be easily obtained. Doing the same in the 64-dimensional VAE space did not perform as intended, likely because of the non-linear manifold representing our system, in which distances are not linear. As a result, a sentence may be linguistically the closest match in terms of syntactic distance or CWE, but it will still not be selected if its acoustic embedding is far apart from that of the previous sentence.
We modify the similarity evaluation metric used for choosing the closest match from the train set by adding a weighted cost to account for acoustic variation. This approach focuses only on the sentence transitions within a paragraph rather than optimizing the entire acoustic embedding path. This is done as follows: (i) Define the weights for linguistic similarity and acoustic similarity. In this work, the two weights sum up to 1; (ii) The objective is to minimize the following loss considering the acoustic embedding chosen for the previous sentence in the paragraph:
Loss = LSW * (1-LS) + (1-LSW) * D,
where LSW = Linguistic Similarity Weight; LS = Linguistic Similarity between test and train sentence; D = Euclidean distance between the acoustic embedding of the train sentence and the acoustic embedding chosen for the previous sentence.
We fix D=0 for the first sentence of every paragraph. Thus, this approach is more suitable for cases when the first sentence is generally the carrier sentence, i.e. one which uses a structural template. This is particularly the case for news stories such as the ones considered in this research.
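A minimal sketch of this weighted selection is given below, under the assumption that linguistic similarity is the cosine similarity used earlier and that the acoustic embeddings have already been PCA-projected to 2D; the function and parameter names are illustrative, and the default LSW of 0.9 follows the value chosen later in this section:

```python
import numpy as np

def lfr_select(test_vec, train_vecs, train_acoustic_2d, prev_choice_2d,
               lsw=0.9, first_sentence=False):
    best_i, best_loss = 0, float("inf")
    for i, (lvec, avec) in enumerate(zip(train_vecs, train_acoustic_2d)):
        ls = float(np.dot(test_vec, lvec) /
                   (np.linalg.norm(test_vec) * np.linalg.norm(lvec)))
        # D is fixed to 0 for the first sentence of a paragraph
        d = 0.0 if first_sentence else float(
            np.linalg.norm(np.asarray(avec) - np.asarray(prev_choice_2d)))
        loss = lsw * (1.0 - ls) + (1.0 - lsw) * d
        if loss < best_loss:
            best_i, best_loss = i, loss
    return best_i  # index of the train sentence whose acoustic embedding is reused
```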
Distances observed between the chosen acoustic embeddings for a sample paragraph and the effect of varying weights are depicted in the matrices in Fig FIGREF7. They are symmetric matrices, where each row and column of the matrix represents the sentence at index i in a paragraph. Each cell represents the Euclidean distance between the acoustic embeddings chosen for sentences at index i,j. We can see that in (a) the sentence at index 4 stands out as the most acoustically dissimilar sentence from the rest of the sentences in the paragraph. We see that the overall acoustic distance between sentences in much higher in (a) than in (b). As we are particularly concerned with transitions from previous to current sentence, we focus on cells (i,i-1) for each row. In (a), sentences at index 4 and 5 particularly stand out as potential erratic transitions due to high values in cell (4,3) and (5,4). In (b) we observe that the distances have significantly reduced and thus sentence transitions are expected to be smooth.
As LSW decreases, the transitions become smoother. This is not `free': there is a trade-off, as increasing the transition smoothness decreases the linguistic similarity which also reduces the prosodic divergence. Fig. FIGREF10 shows the trade-off between the two, across the test set, when using syntactic distance to evaluate LS. Low linguistic distance (i.e. 1 - LS) and low acoustic distance are required.
The plot shows that there is a sharp decrease in acoustic distance between LSW of 1.0 and 0.9, but the reduction becomes slower from there on, while the changes in linguistic distance progress in a linear fashion. We informally evaluated the performance of the systems by reducing LSW from 1.0 to 0.7 with a step size of 0.05 in order to look for an optimal balance. At LSW=0.9, the first elbow on the acoustic distance curve, there was a significant decrease in the perceived erraticness. As such, we chose this value for our LFR evaluations.
<<</Applications to LFR>>>
<<</Proposed Systems>>>
<<<Experimental Protocol>>>
The research questions we attempt to answer are:
Can linguistics-driven selection of acoustic waveform from the existing dataset lead to improved prosody and naturalness when synthesizing speech?
How does syntactic selection compare with CWE selection?
Does this approach improve LFR experience as well?
To answer these questions, we used in our experiments the systems, data and subjective evaluations described below.
<<<Text-to-Speech System>>>
The evaluated TTS system is a Tacotron-like system BIBREF20 already verified for the newscaster domain. A schematic description can be seen in Fig. FIGREF15 and a detailed explanation of the baseline system and the training data can be read in BIBREF21, BIBREF22. Conversion of the produced spectrograms to waveforms is done using the Universal WaveRNN-like model presented in BIBREF2.
For this study, we consider an improved system that replaced the one-hot vector style modeling approach by a VAE-based reference encoder similar to BIBREF5, BIBREF3, in which the VAE embedding represents an acoustic encoding of a speech signal, allowing us to drive the prosodic representation of the synthesized text as observed in BIBREF23. The way of selecting the embedding at inference time is defined by the approaches introduced in Sections SECREF1 and SECREF6. The dimension of the embedding is set to 64 as it allows for the best convergence without collapsing the KLD loss during training.
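For readers unfamiliar with this kind of module, the following is a rough sketch of the general VAE reference-encoder pattern (a recurrent summary of the reference mel-spectrogram mapped to a 64-dimensional Gaussian posterior); the layer sizes and structure are assumptions for illustration, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class VAEReferenceEncoder(nn.Module):
    def __init__(self, n_mels=80, hidden=128, z_dim=64):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, z_dim)
        self.to_logvar = nn.Linear(hidden, z_dim)

    def forward(self, mel):                               # mel: (B, T, n_mels)
        _, h = self.rnn(mel)                              # h: (1, B, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kld                                     # z: 64-dim acoustic embedding
```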
<<</Text-to-Speech System>>>
<<<Datasets>>>
<<<Training Dataset>>>
(i) TTS System dataset: We trained our TTS system with a mixture of neutral and newscaster style speech, for a total of 24 hours of training data, split into 20 hours of neutral speech (22000 utterances) and 4 hours of newscaster-styled speech (3000 utterances).
(ii) Embedding selection dataset: As the evaluation was carried out only on the newscaster speaking style, we restrict our linguistic search space to the utterances associated to the newscaster style: 3000 sentences.
<<</Training Dataset>>>
<<<Evaluation Dataset>>>
The systems were evaluated on two datasets:
(i) Common Prosody Errors (CPE): The dataset on which the baseline Prostron model fails to generate appropriate prosody. This dataset consists of complex utterances like compound nouns (22%), “or" questions (9%), “wh" questions (18%). This set is further enhanced by sourcing complex utterances (51%) from BIBREF24.
(ii) LFR: As demonstrated in BIBREF25, evaluating sentences in isolation does not suffice if we want to evaluate the quality of long-form speech. Thus, for evaluations on LFR we curated a dataset of news samples. The news style sentences were concatenated into full news stories, to capture the overall experience of our intended use case.
<<</Evaluation Dataset>>>
<<</Datasets>>>
<<<Subjective evaluation>>>
Our tests are based on MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) BIBREF26, but without forcing a system to be rated as 100, and not always considering a top anchor. All of our listeners, regardless of linguistic knowledge were native US English speakers. For the CPE dataset, we carried out two tests. The first one with 10 linguistic experts as listeners, who were asked to rate the appropriateness of the prosody ignoring the speaking style on a scale from 0 (very inappropriate) to 100 (very appropriate). The second test was carried out on 10 crowd-sourced listeners who evaluated the naturalness of the speech from 0 to 100. In both tests each listener was asked to rate 28 different screens, with 4 randomly ordered samples per screen for a total of 112 samples. The 4 systems were the 3 proposed ones and the centroid-based VAE inference as the baseline.
For the LFR dataset, we conducted only a crowd-sourced evaluation of naturalness, where the listeners were asked to assess the suitability of newscaster style on a scale from 0 (completely unsuitable) to 100 (completely adequate). Each listener was presented with 51 news stories, each playing one of the 5 systems including the original recordings as a top anchor, the centroid-based VAE as baseline and the 3 proposed linguistics-driven embedding selection systems.
<<</Subjective evaluation>>>
<<</Experimental Protocol>>>
<<<Results>>>
Table 1 reports the average MUSHRA scores, evaluating prosody and naturalness, for each of the test systems on the CPE dataset. These results answer Q1, as the proposed approach improves significantly over the baseline on both grounds. It thus gives us evidence supporting our hypothesis that linguistics-driven acoustic embedding selection can significantly improve speech quality. We also observe that better prosody does not directly translate into improved naturalness and that there is a need to improve acoustic modeling in order to better reflect the prosodic improvements achieved.
We validate the differences between MUSHRA scores using pairwise t-test. All proposed systems improved significantly over the baseline prosody (p$<$0.01). For naturalness, BERT syntactic performed the best, improving over the baseline significantly (p=0.04). Other systems did not give statistically significant improvement over the baseline (p$>$0.05). The difference between BERT and BERT Syntactic is also statistically insignificant.
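For illustration, such a pairwise comparison can be run as a paired t-test over per-stimulus scores; the arrays below are synthetic stand-ins for the ratings, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(70, 8, size=112)                  # per-stimulus MUSHRA scores
bert_syntactic = baseline + rng.normal(2, 5, size=112)  # illustrative offset
t_stat, p_value = stats.ttest_rel(bert_syntactic, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```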
Q2 is explored in Table TABREF21, which gives the breakdown of prosody results by major categories in CPE. For `wh' questions, we observe that Syntactic alone brings an improvement of 4% and BERT Syntactic performs the best by improving 8% over the baseline. This suggests that `wh' questions generally share a closely related syntax structure and that information can be used to achieve better prosody. This intuition is further strengthened by the improvements observed for `or' questions. Syntactic alone improves by 9% over the baseline and BERT Syntactic performs the best by improving 21% over the baseline. The improvement observed in `or' questions is greater than `wh' questions as most `or' questions have a syntax structure unique to them and this is consistent across samples in the category. For both these categories, the systems Syntactic, BERT and BERT Syntactic show incremental improvement as the first system contains only syntactic information, the next captures some aspect of syntax with semantics and the third has enhanced the representation of syntax with CWE representation to drive selection. Thus, it is evident that the extent of syntactic information captured drives the quality in speech synthesis for these two categories.
Compound nouns proved harder to improve upon as compared to questions. BERT performed the best in this category with a 1.2% improvement over the baseline. We can attribute this to the capability of BERT to capture context which Syntactic does not do. This plays a critical role in compound nouns, where to achieve suitable prosody it is imperative to understand in which context the nouns are being used. For other complex sentences as well, BERT performed the best by improving over the baseline by 6%. This can again be attributed to the fact that most of the complex sentences required contextual knowledge. Although Syntactic does improve over the baseline, syntax does not look like the driving factor as BERT Syntactic performs a bit worse than BERT. This indicates that enhancing syntax representation hinders BERT from fully leveraging the contextual knowledge it captured to drive embedding selection.
Q3 is answered in Table TABREF22, which reports the MUSHRA scores on the LFR dataset. The Syntactic system performed the best with high statistical significance (p=0.02) in comparison to baseline. We close the gap between the baseline and the recordings by almost 20%. Other systems show statistically insignificant (p$>$0.05) improvements over the baseline. To achieve suitable prosody, LFR requires longer distance dependencies and knowledge of prosodic groups. Such information can be approximated more effectively by the Syntactic system rather than the CWE based systems. However, this is a topic for a potential future exploration as the difference between BERT and Syntactic is statistically insignificant (p=0.6).
<<</Results>>>
<<<Conclusion>>>
The current VAE-based TTS systems are susceptible to monotonous speech generation due to the need to select a suitable acoustic embedding to synthesize a sample. In this work, we proposed to generate dynamic prosody from the same TTS systems by using linguistics to drive acoustic embedding selection. Our proposed approach is able to improve the overall speech quality including prosody and naturalness. We propose 3 techniques (Syntactic, BERT and BERT Syntactic) and evaluated their performance on 2 datasets: common prosodic errors and LFR. The Syntactic system was able to improve significantly over the baseline on almost all parameters (except for naturalness on CPE). Information captured by BERT further improved prosody in cases where contextual knowledge was required. For LFR, we bridged the gap between baseline and actual recording by 20%. This approach can be further extended by making the model aware of these features rather than using them to drive embedding selection.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Conclusion, Proposed Systems"
],
"type": "disordered_section"
}
|
1912.00955
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Dynamic Prosody Generation for Speech Synthesis using Linguistics-Driven Acoustic Embedding Selection
<<<Abstract>>>
Recent advances in Text-to-Speech (TTS) have improved quality and naturalness to near-human capabilities when considering isolated sentences. But something which is still lacking in order to achieve human-like communication is the dynamic variations and adaptability of human speech. This work attempts to solve the problem of achieving a more dynamic and natural intonation in TTS systems, particularly for stylistic speech such as the newscaster speaking style. We propose a novel embedding selection approach which exploits linguistic information, leveraging the speech variability present in the training dataset. We analyze the contribution of both semantic and syntactic features. Our results show that the approach improves the prosody and naturalness for complex utterances as well as in Long Form Reading (LFR).
<<</Abstract>>>
<<<Introduction>>>
Corresponding author email: [email protected]. Paper submitted to IEEE ICASSP 2020
Recent advances in TTS have improved the achievable synthetic speech naturalness to near human-like capabilities BIBREF0, BIBREF1, BIBREF2, BIBREF3. This means that for simple sentences, or for situations in which we can correctly predict the most appropriate prosodic representation, TTS systems are providing us with speech practically indistinguishable from that of humans.
One aspect that most systems are still lacking is the natural variability of human speech, which is being observed as one of the reasons why the cognitive load of synthetic speech is higher than that of humans BIBREF4. This is something that variational models such as those based on Variational Auto-Encoding (VAE) BIBREF3, BIBREF5 attempt to solve by exploiting the sampling capabilities of the acoustic embedding space at inference time.
Despite the advantages that VAE-based inference brings, it also suffers from the limitation that to synthesize a sample, one has to select an appropriate acoustic embedding for it, which can be challenging. A possible solution to this is to remove the selection process and consistently use a centroid to represent speech. This provides reliable acoustic representations but it suffers again from the monotonicity problem of conventional TTS. Another approach is to simply do a random sampling of the acoustic space. This would certainly solve the monotonicity problem if the acoustic embedding were varied enough. It can however, introduce erratic prosodic representations of longer texts, which can prove to be worse than being monotonous. Finally, one can consider text-based selection or prediction, as done in this research.
In this work, we present a novel approach for informed embedding selection using linguistic features. The tight relationship between syntactic constituent structure and prosody is well known BIBREF6, BIBREF7. In the traditional Natural Language Processing (NLP) pipeline, constituency parsing produces full syntactic trees. More recent approaches based on Contextual Word Embedding (CWE) suggest that CWE are largely able to implicitly represent the classic NLP pipeline BIBREF8, while still retaining the ability to model lexical semantics BIBREF9. Thus, in this work we explore how TTS systems can enhance the quality of speech synthesis by using such linguistic features to guide the prosodic contour of generated speech.
Similar relevant recent work exploring the advantages of exploiting syntactic information for TTS can be seen in BIBREF10, BIBREF11. While those studies, without any explicit acoustic pairing to the linguistic information, inject a number of curated features concatenated to the phonetic sequence as a way of informing the TTS system, the present study makes use of the linguistic information to drive the acoustic embedding selection rather than using it as an additional model features.
An exploration of how to use linguistics as a way of predicting adequate acoustic embeddings can be seen in BIBREF12, where the authors explore the path of predicting an adequate embedding by informing the system with a set of linguistic and semantic information. The main difference of the present work is that in our case, rather than predicting a point in a high-dimensional space by making use of sparse input information (which is a challenging task and potentially vulnerable to training-domain dependencies), we use the linguistic information to predict the most similar embedding in our training set, reducing the complexity of the task significantly.
The main contributions of this work are: i) we propose a novel approach of embedding selection in the acoustic space by using linguistic features; ii) we demonstrate that including syntactic information-driven acoustic embedding selection improves the overall speech quality, including its prosody; iii) we compare the improvements achieved by exploiting syntactic information in contrast with those brought by CWE; iv) we demonstrate that the approach improves the TTS quality in LFR experience as well.
<<</Introduction>>>
<<<Proposed Systems>>>
CWE seem the obvious choice to drive embedding selection as they contain both syntactic and semantic information. However, a possible drawback of relying on CWE is that the linguistic-acoustic mapping space is sparse. The generalization capability of such systems in unseen scenarios will be poor BIBREF13. Also, as CWE models lexical semantics, it implies that two semantically similar sentences are likely to have similar CWE representations. This however does not necessarily correspond to a similarity in prosody, as the structure of the two sentences can be very different.
We hypothesize that, in some scenarios, syntax will have better capability to generalize than semantics and that CWE have not been optimally exploited for driving prosody in speech synthesis. We explore these two hypotheses in our experiments. The objective of this work is to exploit sentence-level prosody variations available in the training dataset while synthesizing speech for the test sentence. The steps executed in this proposed approach are: (i) generate suitable vector representations containing linguistic information for all the sentences in the train and test sets; (ii) measure the similarity of the test sentence with each of the sentences in the train set, using cosine similarity between the vector representations as done in BIBREF14 to evaluate linguistic similarity; (iii) choose the acoustic embedding of the train sentence which gives the highest similarity with the test sentence; (iv) synthesize speech from VAE-based inference using this acoustic embedding.
<<<Systems>>>
We experiment with three different systems for generating vector representations of the sentences, which allow us to explore the impact of both syntax and semantics on the overall quality of speech synthesis. The representations from the first system use syntactic information only, the second relies solely on CWE while the third uses a combination of CWE and explicit syntactic information.
<<<Syntactic>>>
Syntactic representations of sentences, such as constituency parse trees, need to be transformed into vectors in order to be usable in neural TTS models. Some dimensions describing the tree can be transformed into word-based categorical features such as the identity of the parent and the position of the word in a phrase BIBREF15.
The syntactic distance between adjacent words is known to be a prosodically relevant numerical source of information which is easily extracted from the constituency tree BIBREF16. It is explained by the fact that if many nodes must be traversed to find the first common ancestor, the syntactic distance between words is high. Large syntactic distances correlate with acoustically relevant events such as phrasing breaks or prosodic resets.
To compute syntactic distance vector representations for sentences, we use the algorithm mentioned in BIBREF17. That is, for a sentence of n tokens, there are n corresponding distances which are concatenated together to give a vector of length n. The distance between the start of sentence and first token is always 0.
We can see an example in Fig. 1: for the sentence “The brown fox is quick and it is jumping over the lazy dog", whose distance vector is d = [0 2 1 3 1 8 7 6 5 4 3 2 1]. The completion of the subject noun phrase (after `fox') triggers a prosodic reset, reflected in the distance of 3 between `fox' and `is'. There should also be a more emphasized reset at the end of the first clause, represented by the distance of 8 between `quick' and `and'.
<<</Syntactic>>>
<<<BERT>>>
To generate CWE we use BERT BIBREF18, as it is one of the best-performing pre-trained models, with state-of-the-art results on a large number of NLP tasks. BERT has also been shown to generate strong representations for both syntax and semantics. We use the word representations from the uncased base (12 layer) model without fine-tuning. The sentence-level representations are obtained by averaging the second-to-last hidden layer for each token in the sentence. These embeddings are used to drive acoustic embedding selection.
<<</BERT>>>
<<<BERT Syntactic>>>
Even though BERT embeddings capture some aspects of syntactic information along with semantics, we decided to experiment with a system combining the information captured by both of the above mentioned systems. The information from syntactic distances and BERT embeddings cannot be combined at token level to give a single vector representation since both these systems use different tokenization algorithms. Tokenization in BERT is based on the wordpiece algorithm BIBREF19 as a way to eliminate the out-of-vocabulary issues. On the other hand, tokenization used to generate parse trees is based on morphological considerations rooted in linguistic theory. At inference time, we average the similarity scores obtained by comparing the BERT embeddings and the syntactic distance vectors.
<<</BERT Syntactic>>>
<<</Systems>>>
<<<Applications to LFR>>>
The approaches described in Section SECREF1 produce utterances with more varied prosody as compared to the long-term monotonicity of those obtained via centroid-based VAE inference. However, when considering multi-sentence texts, we have to be mindful of the issues that can be introduced by erratic transitions. We tackle this issue by minimizing the acoustic variation a sentence can have with respect to the previous one, while still minimizing the linguistic distance. We consider the Euclidean distance between the 2D Principal Component Analysis (PCA) projected acoustic embeddings as a measure of acoustic variation, as we observe that the projected space provides us with an acoustically relevant space in which distances can be easily obtained. Doing the same in the 64-dimensional VAE space did not perform as intended, likely because of the non-linear manifold representing our system, in which distances are not linear. As a result, a sentence may be linguistically the closest match in terms of syntactic distance or CWE, but it will still not be selected if its acoustic embedding is far apart from that of the previous sentence.
We modify the similarity evaluation metric used for choosing the closest match from the train set by adding a weighted cost to account for acoustic variation. This approach focuses only on the sentence transitions within a paragraph rather than optimizing the entire acoustic embedding path. This is done as follows: (i) Define the weights for linguistic similarity and acoustic similarity. In this work, the two weights sum up to 1; (ii) The objective is to minimize the following loss considering the acoustic embedding chosen for the previous sentence in the paragraph:
Loss = LSW * (1-LS) + (1-LSW) * D,
where LSW = Linguistic Similarity Weight; LS = Linguistic Similarity between test and train sentence; D = Euclidean distance between the acoustic embedding of the train sentence and the acoustic embedding chosen for the previous sentence.
We fix D=0 for the first sentence of every paragraph. Thus, this approach is more suitable for cases when the first sentence is generally the carrier sentence, i.e. one which uses a structural template. This is particularly the case for news stories such as the ones considered in this research.
Distances observed between the chosen acoustic embeddings for a sample paragraph and the effect of varying weights are depicted in the matrices in Fig FIGREF7. They are symmetric matrices, where each row and column of the matrix represents the sentence at index i in a paragraph. Each cell represents the Euclidean distance between the acoustic embeddings chosen for sentences at index i,j. We can see that in (a) the sentence at index 4 stands out as the most acoustically dissimilar sentence from the rest of the sentences in the paragraph. We see that the overall acoustic distance between sentences is much higher in (a) than in (b). As we are particularly concerned with transitions from previous to current sentence, we focus on cells (i,i-1) for each row. In (a), sentences at index 4 and 5 particularly stand out as potential erratic transitions due to high values in cells (4,3) and (5,4). In (b) we observe that the distances have significantly reduced and thus sentence transitions are expected to be smooth.
As LSW decreases, the transitions become smoother. This is not `free': there is a trade-off, as increasing the transition smoothness decreases the linguistic similarity which also reduces the prosodic divergence. Fig. FIGREF10 shows the trade-off between the two, across the test set, when using syntactic distance to evaluate LS. Low linguistic distance (i.e. 1 - LS) and low acoustic distance are required.
The plot shows that there is a sharp decrease in acoustic distance between LSW of 1.0 and 0.9, but the reduction becomes slower from there on, while the changes in linguistic distance progress in a linear fashion. We informally evaluated the performance of the systems by reducing LSW from 1.0 to 0.7 with a step size of 0.05 in order to look for an optimal balance. At LSW=0.9, the first elbow on the acoustic distance curve, there was a significant decrease in the perceived erraticness. As such, we chose this value for our LFR evaluations.
<<</Applications to LFR>>>
<<</Proposed Systems>>>
<<<Experimental Protocol>>>
The research questions we attempt to answer are:
Can linguistics-driven selection of acoustic waveform from the existing dataset lead to improved prosody and naturalness when synthesizing speech?
How does syntactic selection compare with CWE selection?
Does this approach improve LFR experience as well?
To answer these questions, we used in our experiments the systems, data and subjective evaluations described below.
<<<Text-to-Speech System>>>
The evaluated TTS system is a Tacotron-like system BIBREF20 already verified for the newscaster domain. A schematic description can be seen in Fig. FIGREF15 and a detailed explanation of the baseline system and the training data can be read in BIBREF21, BIBREF22. Conversion of the produced spectrograms to waveforms is done using the Universal WaveRNN-like model presented in BIBREF2.
For this study, we consider an improved system that replaced the one-hot vector style modeling approach by a VAE-based reference encoder similar to BIBREF5, BIBREF3, in which the VAE embedding represents an acoustic encoding of a speech signal, allowing us to drive the prosodic representation of the synthesized text as observed in BIBREF23. The way of selecting the embedding at inference time is defined by the approaches introduced in Sections SECREF1 and SECREF6. The dimension of the embedding is set to 64 as it allows for the best convergence without collapsing the KLD loss during training.
<<</Text-to-Speech System>>>
<<<Datasets>>>
<<<Training Dataset>>>
(i) TTS System dataset: We trained our TTS system with a mixture of neutral and newscaster style speech, for a total of 24 hours of training data, split into 20 hours of neutral speech (22000 utterances) and 4 hours of newscaster-styled speech (3000 utterances).
(ii) Embedding selection dataset: As the evaluation was carried out only on the newscaster speaking style, we restrict our linguistic search space to the utterances associated to the newscaster style: 3000 sentences.
<<</Training Dataset>>>
<<<Evaluation Dataset>>>
The systems were evaluated on two datasets:
(i) Common Prosody Errors (CPE): The dataset on which the baseline Prostron model fails to generate appropriate prosody. This dataset consists of complex utterances like compound nouns (22%), “or" questions (9%), “wh" questions (18%). This set is further enhanced by sourcing complex utterances (51%) from BIBREF24.
(ii) LFR: As demonstrated in BIBREF25, evaluating sentences in isolation does not suffice if we want to evaluate the quality of long-form speech. Thus, for evaluations on LFR we curated a dataset of news samples. The news style sentences were concatenated into full news stories, to capture the overall experience of our intended use case.
<<</Evaluation Dataset>>>
<<</Datasets>>>
<<<Subjective evaluation>>>
Our tests are based on MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) BIBREF26, but without forcing a system to be rated as 100, and not always considering a top anchor. All of our listeners, regardless of linguistic knowledge were native US English speakers. For the CPE dataset, we carried out two tests. The first one with 10 linguistic experts as listeners, who were asked to rate the appropriateness of the prosody ignoring the speaking style on a scale from 0 (very inappropriate) to 100 (very appropriate). The second test was carried out on 10 crowd-sourced listeners who evaluated the naturalness of the speech from 0 to 100. In both tests each listener was asked to rate 28 different screens, with 4 randomly ordered samples per screen for a total of 112 samples. The 4 systems were the 3 proposed ones and the centroid-based VAE inference as the baseline.
For the LFR dataset, we conducted only a crowd-sourced evaluation of naturalness, where the listeners were asked to assess the suitability of newscaster style on a scale from 0 (completely unsuitable) to 100 (completely adequate). Each listener was presented with 51 news stories, each playing one of the 5 systems including the original recordings as a top anchor, the centroid-based VAE as baseline and the 3 proposed linguistics-driven embedding selection systems.
<<</Subjective evaluation>>>
<<</Experimental Protocol>>>
<<<Results>>>
Table 1 reports the average MUSHRA scores, evaluating prosody and naturalness, for each of the test systems on the CPE dataset. These results answer Q1, as the proposed approach improves significantly over the baseline on both grounds. It thus gives us evidence supporting our hypothesis that linguistics-driven acoustic embedding selection can significantly improve speech quality. We also observe that better prosody does not directly translate into improved naturalness and that there is a need to improve acoustic modeling in order to better reflect the prosodic improvements achieved.
We validate the differences between MUSHRA scores using pairwise t-test. All proposed systems improved significantly over the baseline prosody (p$<$0.01). For naturalness, BERT syntactic performed the best, improving over the baseline significantly (p=0.04). Other systems did not give statistically significant improvement over the baseline (p$>$0.05). The difference between BERT and BERT Syntactic is also statistically insignificant.
Q2 is explored in Table TABREF21, which gives the breakdown of prosody results by major categories in CPE. For `wh' questions, we observe that Syntactic alone brings an improvement of 4% and BERT Syntactic performs the best by improving 8% over the baseline. This suggests that `wh' questions generally share a closely related syntax structure and that information can be used to achieve better prosody. This intuition is further strengthened by the improvements observed for `or' questions. Syntactic alone improves by 9% over the baseline and BERT Syntactic performs the best by improving 21% over the baseline. The improvement observed in `or' questions is greater than `wh' questions as most `or' questions have a syntax structure unique to them and this is consistent across samples in the category. For both these categories, the systems Syntactic, BERT and BERT Syntactic show incremental improvement as the first system contains only syntactic information, the next captures some aspect of syntax with semantics and the third has enhanced the representation of syntax with CWE representation to drive selection. Thus, it is evident that the extent of syntactic information captured drives the quality in speech synthesis for these two categories.
Compound nouns proved harder to improve upon as compared to questions. BERT performed the best in this category with a 1.2% improvement over the baseline. We can attribute this to the capability of BERT to capture context which Syntactic does not do. This plays a critical role in compound nouns, where to achieve suitable prosody it is imperative to understand in which context the nouns are being used. For other complex sentences as well, BERT performed the best by improving over the baseline by 6%. This can again be attributed to the fact that most of the complex sentences required contextual knowledge. Although Syntactic does improve over the baseline, syntax does not look like the driving factor as BERT Syntactic performs a bit worse than BERT. This indicates that enhancing syntax representation hinders BERT from fully leveraging the contextual knowledge it captured to drive embedding selection.
Q3 is answered in Table TABREF22, which reports the MUSHRA scores on the LFR dataset. The Syntactic system performed the best with high statistical significance (p=0.02) in comparison to baseline. We close the gap between the baseline and the recordings by almost 20%. Other systems show statistically insignificant (p$>$0.05) improvements over the baseline. To achieve suitable prosody, LFR requires longer distance dependencies and knowledge of prosodic groups. Such information can be approximated more effectively by the Syntactic system rather than the CWE based systems. However, this is a topic for a potential future exploration as the difference between BERT and Syntactic is statistically insignificant (p=0.6).
<<</Results>>>
<<<Conclusion>>>
The current VAE-based TTS systems are susceptible to monotonous speech generation due to the need to select a suitable acoustic embedding to synthesize a sample. In this work, we proposed to generate dynamic prosody from the same TTS systems by using linguistics to drive acoustic embedding selection. Our proposed approach is able to improve the overall speech quality including prosody and naturalness. We propose 3 techniques (Syntactic, BERT and BERT Syntactic) and evaluated their performance on 2 datasets: common prosodic errors and LFR. The Syntactic system was able to improve significantly over the baseline on almost all parameters (except for naturalness on CPE). Information captured by BERT further improved prosody in cases where contextual knowledge was required. For LFR, we bridged the gap between baseline and actual recording by 20%. This approach can be further extended by making the model aware of these features rather than using them to drive embedding selection.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Abstract, Conclusion"
],
"type": "disordered_section"
}
|
1909.08752
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Summary Level Training of Sentence Rewriting for Abstractive Summarization
<<<Abstract>>>
As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of extracting salient sentences from a document first and then paraphrasing the selected ones to generate a summary. However, the existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between a training objective and evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its ability on natural language understanding. In extensive experiments, we show that a combination of our proposed model and training procedure obtains new state-of-the-art performance on both CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on DUC-2002 test set.
<<</Abstract>>>
<<<Introduction>>>
The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping salient information of the original text. In general, there are two ways to do text summarization: Extractive and Abstractive BIBREF0. Extractive approaches generate summaries by selecting salient sentences or phrases from a source text, while abstractive approaches involve a process of paraphrasing or generating sentences to write a summary.
Recent work BIBREF1, BIBREF2 demonstrates that it is highly beneficial for extractive summarization models to incorporate pre-trained language models (LMs) such as BERT BIBREF3 into their architectures. However, the performance improvement from the pre-trained LMs is known to be relatively small in case of abstractive summarization BIBREF4, BIBREF5. This discrepancy may be due to the difference between extractive and abstractive approaches in ways of dealing with the task—the former classifies whether each sentence to be included in a summary, while the latter generates a whole summary from scratch. In other words, as most of the pre-trained LMs are designed to be of help to the tasks which can be categorized as classification including extractive summarization, they are not guaranteed to be advantageous to abstractive summarization models that should be capable of generating language BIBREF6, BIBREF7.
On the other hand, recent studies for abstractive summarization BIBREF8, BIBREF9, BIBREF10 have attempted to exploit extractive models. Among these, a notable one is BIBREF8, in which a sophisticated model called Reinforce-Selected Sentence Rewriting is proposed. The model consists of both an extractor and abstractor, where the extractor picks out salient sentences first from a source article, and then the abstractor rewrites and compresses the extracted sentences into a complete summary. It is further fine-tuned by training the extractor with the rewards derived from sentence-level ROUGE scores of the summary generated from the abstractor.
In this paper, we improve the model of BIBREF8, addressing two primary issues. Firstly, we argue there is a bottleneck in the existing extractor on the basis of the observation that its performance as an independent summarization model (i.e., without the abstractor) is no better than solid baselines such as selecting the first 3 sentences. To resolve the problem, we present a novel neural extractor exploiting the pre-trained LMs (BERT in this work) which are expected to perform better according to the recent studies BIBREF1, BIBREF2. Since the extractor is a sort of sentence classifier, we expect that it can make good use of the ability of pre-trained LMs which is proven to be effective in classification.
Secondly, there is a mismatch between the training objective and the evaluation metric; the previous work utilizes the sentence-level ROUGE scores as a reinforcement learning objective, while the final performance of a summarization model is evaluated by the summary-level ROUGE scores. Moreover, as BIBREF11 pointed out, sentences with the highest individual ROUGE scores do not necessarily lead to an optimal summary, since they may contain overlapping contents, causing verbose and redundant summaries. Therefore, we propose to directly use the summary-level ROUGE scores as an objective instead of the sentence-level scores. A potential problem arising from this approach is the sparsity of training signals, because the summary-level ROUGE scores are calculated only once for each training episode. To alleviate this problem, we use reward shaping BIBREF12 to give an intermediate signal for each action, preserving the optimal policy.
We empirically demonstrate the superiority of our approach by achieving new state-of-the-art abstractive summarization results on CNN/Daily Mail and New York Times datasets BIBREF13, BIBREF14. It is worth noting that our approach shows large improvements especially on ROUGE-L score which is considered a means of assessing fluency BIBREF11. In addition, our model performs much better than previous work when testing on DUC-2002 dataset, showing better generalization and robustness of our model.
Our contributions in this work are three-fold: a novel successful application of pre-trained transformers for abstractive summarization; suggesting a training method to globally optimize sentence selection; achieving the state-of-the-art results on the benchmark datasets, CNN/Daily Mail and New York Times.
<<</Introduction>>>
<<<Background>>>
<<<Sentence Rewriting>>>
In this paper, we focus on single-document multi-sentence summarization and propose a neural abstractive model based on the Sentence Rewriting framework BIBREF8, BIBREF15 which consists of two parts: a neural network for the extractor and another network for the abstractor. The extractor network is designed to extract salient sentences from a source article. The abstractor network rewrites the extracted sentences into a short summary.
<<</Sentence Rewriting>>>
<<<Learning Sentence Selection>>>
The most common way to train extractor to select informative sentences is building extractive oracles as gold targets, and training with cross-entropy (CE) loss. An oracle consists of a set of sentences with the highest possible ROUGE scores. Building oracles is finding an optimal combination of sentences, where there are $2^n$ possible combinations for each example. Because of this, the exact optimization for ROUGE scores is intractable. Therefore, alternative methods identify the set of sentences with greedy search BIBREF16, sentence-level search BIBREF9, BIBREF17 or collective search using the limited number of sentences BIBREF15, which construct suboptimal oracles. Even if all the optimal oracles are found, training with CE loss using these labels will cause underfitting as it will only maximize probabilities for sentences in label sets and ignore all other sentences.
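As a concrete picture of the greedy construction referenced here (and used later for extractor pre-training), a sketch follows; `rouge` stands for any summary-level scoring callable and is an assumption, not a specific library:

```python
def greedy_oracle(doc_sents, reference, rouge, max_sents=None):
    selected, best = [], 0.0
    while max_sents is None or len(selected) < max_sents:
        candidates = [
            (rouge(" ".join(doc_sents[j] for j in sorted(selected + [i])), reference), i)
            for i in range(len(doc_sents)) if i not in selected
        ]
        if not candidates:
            break
        score, i = max(candidates)
        if score <= best:            # stop when no remaining sentence improves the score
            break
        selected.append(i)
        best = score
    return sorted(selected)
```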
Alternatively, reinforcement learning (RL) can give room for exploration in the search space. BIBREF8, our baseline work, proposed to apply policy gradient methods to train an extractor. This approach makes an end-to-end trainable stochastic computation graph, encouraging the model to select sentences with high ROUGE scores. However, they define a reward for an action (sentence selection) as a sentence-level ROUGE score between the chosen sentence and a sentence in the ground truth summary for that time step. This leads the extractor agent to a suboptimal policy; the set of sentences matching individually with each sentence in a ground truth summary isn't necessarily optimal in terms of summary-level ROUGE score.
BIBREF11 proposed policy gradient with rewards from summary-level ROUGE. They defined an action as sampling a summary from candidate summaries that contain the limited number of plausible sentences. After training, a sentence is ranked high for selection if it often occurs in high scoring summaries. However, their approach still has a risk of ranking redundant sentences high; if two highly overlapped sentences have salient information, they would be ranked high together, increasing the probability of being sampled in one summary.
To tackle this problem, we propose a training method using reinforcement learning which globally optimizes summary-level ROUGE score and gives intermediate rewards to ease the learning.
<<</Learning Sentence Selection>>>
<<<Pre-trained Transformers>>>
Transferring representations from pre-trained transformer language models has been highly successful in the domain of natural language understanding tasks BIBREF18, BIBREF3, BIBREF19, BIBREF20. These methods first pre-train highly stacked transformer blocks BIBREF21 on a huge unlabeled corpus, and then fine-tune the models or representations on downstream tasks.
<<</Pre-trained Transformers>>>
<<</Background>>>
<<<Model>>>
Our model consists of two neural network modules, i.e. an extractor and abstractor. The extractor encodes a source document and chooses sentences from the document, and then the abstractor paraphrases the summary candidates. Formally, a single document consists of $n$ sentences $D=\lbrace s_1,s_2,\cdots ,s_n\rbrace $. We denote $i$-th sentence as $s_i=\lbrace w_{i1},w_{i2},\cdots ,w_{im}\rbrace $ where $w_{ij}$ is the $j$-th word in $s_i$. The extractor learns to pick out a subset of $D$ denoted as $\hat{D}=\lbrace \hat{s}_1,\hat{s}_2,\cdots ,\hat{s}_k|\hat{s}_i\in D\rbrace $ where $k$ sentences are selected. The abstractor rewrites each of the selected sentences to form a summary $S=\lbrace f(\hat{s}_1),f(\hat{s}_2),\cdots ,f(\hat{s}_k)\rbrace $, where $f$ is an abstracting function. And a gold summary consists of $l$ sentences $A=\lbrace a_1,a_2,\cdots ,a_l\rbrace $.
<<<Extractor Network>>>
The extractor is based on the encoder-decoder framework. We adapt BERT for the encoder to exploit contextualized representations from pre-trained transformers. BERT as the encoder maps the input sequence $D$ to sentence representation vectors $H=\lbrace h_1,h_2,\cdots ,h_n\rbrace $, where $h_i$ is for the $i$-th sentence in the document. Then, the decoder utilizes $H$ to extract $\hat{D}$ from $D$.
<<<Leveraging Pre-trained Transformers>>>
Although we require the encoder to output the representation for each sentence, the output vectors from BERT are grounded to tokens instead of sentences. Therefore, we modify the input sequence and embeddings of BERT as BIBREF1 did.
In the original BERT configuration, a [CLS] token is used to get features from one sentence or a pair of sentences. Since we need a symbol for each sentence representation, we insert a [CLS] token before each sentence. We also add a [SEP] token at the end of each sentence, which is used to differentiate multiple sentences. As a result, the vector for the $i$-th [CLS] symbol from the top BERT layer corresponds to the $i$-th sentence representation $h_i$.
In addition, we add interval segment embeddings as input for BERT to distinguish multiple sentences within a document. For $s_i$ we assign a segment embedding $E_A$ or $E_B$ conditioned on whether $i$ is odd or even. For example, for a consecutive sequence of sentences $s_1, s_2, s_3, s_4, s_5$, we assign $E_A, E_B, E_A, E_B, E_A$ in order. All the words in each sentence are assigned to the same segment embedding, i.e. the segment embeddings for $w_{11}, w_{12},\cdots ,w_{1m}$ are $E_A,E_A,\cdots ,E_A$. An illustration for this procedure is shown in Figure FIGREF1.
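As an illustration only (not the authors' code; all names are ours), the following sketch shows how the [CLS]/[SEP] insertion and the alternating interval segment ids could be constructed from a pre-tokenized document:

```python
# A minimal sketch (not the authors' code; names are illustrative): build the
# modified BERT input for a document of pre-tokenized sentences, inserting
# [CLS] before and [SEP] after every sentence and assigning alternating
# interval segment ids (0 for E_A, 1 for E_B) to all tokens of a sentence.
def build_extractor_input(sentences):
    """sentences: list of token lists, e.g. [["south", "korea", "won"], ...]"""
    tokens, segment_ids, cls_positions = [], [], []
    for i, sent in enumerate(sentences):
        seg = i % 2                         # s_1 -> E_A, s_2 -> E_B, s_3 -> E_A, ...
        cls_positions.append(len(tokens))   # the i-th [CLS] output becomes h_i
        tokens.append("[CLS]"); segment_ids.append(seg)
        for w in sent:
            tokens.append(w); segment_ids.append(seg)
        tokens.append("[SEP]"); segment_ids.append(seg)
    return tokens, segment_ids, cls_positions

doc = [["south", "korea", "won", "."], ["the", "score", "was", "2-0", "."]]
toks, segs, cls_pos = build_extractor_input(doc)
print(toks)     # ['[CLS]', 'south', ..., '[SEP]', '[CLS]', 'the', ..., '[SEP]']
print(segs)     # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
print(cls_pos)  # [0, 6] -> rows of the top BERT layer taken as h_1 and h_2
```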
<<</Leveraging Pre-trained Transformers>>>
<<<Sentence Selection>>>
We use LSTM Pointer Network BIBREF22 as the decoder to select the extracted sentences based on the above sentence representations. The decoder extracts sentences recurrently, producing a distribution over all of the remaining sentence representations excluding those already selected. Since we use the sequential model which selects one sentence at a time step, our decoder can consider the previously selected sentences. This property is needed to avoid selecting sentences that have overlapping information with the sentences extracted already.
As the decoder structure is almost the same as that of the previous work, we restate the equations of BIBREF8 to avoid confusion, with minor modifications to agree with our notations. Formally, the extraction probability is calculated as:
where $e_t$ is the output of the glimpse operation:
In Equation DISPLAY_FORM9, $z_t$ is the hidden state of the LSTM decoder at time $t$ (shown in green in Figure FIGREF1). All the $W$ and $v$ are trainable parameters.
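Since the display equations are only referenced here, the following numpy sketch is an assumption-laden illustration of a glimpse-then-pointer step in the spirit of BIBREF22 and BIBREF8, with randomly initialized parameters rather than the paper's exact parameterization:

```python
# An assumption-laden numpy sketch of a glimpse-then-pointer step: the decoder
# state z_t first attends over the sentence vectors H to form a glimpse e_t,
# which is then used to score the remaining (not yet selected) sentences.
# All parameter names and shapes are illustrative, not the paper's.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def pointer_step(H, z_t, selected, W_g, W_p, v_g, v_p):
    glimpse_scores = np.tanh(H @ W_g + z_t) @ v_g      # attention over all sentences
    e_t = softmax(glimpse_scores) @ H                  # glimpse vector
    point_scores = np.tanh(H @ W_p + e_t) @ v_p        # score remaining sentences
    point_scores[list(selected)] = -1e9                # mask already-selected ones
    return softmax(point_scores)                       # extraction distribution

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 5, 8
    H, z_t = rng.normal(size=(n, d)), rng.normal(size=d)
    W_g, W_p = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    v_g, v_p = rng.normal(size=d), rng.normal(size=d)
    probs = pointer_step(H, z_t, {1}, W_g, W_p, v_g, v_p)
    print(probs.round(3), probs.sum())
```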
<<</Sentence Selection>>>
<<</Extractor Network>>>
<<<Abstractor Network>>>
The abstractor network approximates $f$, which compresses and paraphrases an extracted document sentence to a concise summary sentence. We use the standard attention based sequence-to-sequence (seq2seq) model BIBREF23, BIBREF24 with the copying mechanism BIBREF25 for handling out-of-vocabulary (OOV) words. Our abstractor is practically identical to the one proposed in BIBREF8.
<<</Abstractor Network>>>
<<</Model>>>
<<<Training>>>
In our model, an extractor selects a series of sentences, and then an abstractor paraphrases them. As they work in different ways, we need different training strategies suitable for each of them. Training the abstractor is relatively straightforward: maximizing the log-likelihood for the next word given the previous ground truth words. However, there are several issues for extractor training. First, the extractor should consider the abstractor's rewriting process when it selects sentences. This causes a weak supervision problem BIBREF26, since the extractor gets training signals indirectly after the paraphrasing processes are finished. In addition, since this procedure contains sampling or maximum selection, the extraction performed by the extractor is non-differentiable. Lastly, although our goal is maximizing ROUGE scores, neural models cannot be trained directly by maximum likelihood estimation from them.
To address the issues above, we apply standard policy gradient methods, and we propose a novel training procedure for the extractor which guides it to the optimal policy in terms of the summary-level ROUGE. As usual in RL for sequence prediction, we pre-train submodules and apply RL to fine-tune the extractor.
<<<Training Submodules>>>
<<<Extractor Pre-training>>>
Starting from a poor random policy makes it difficult to train the extractor agent to converge towards the optimal policy. Thus, we pre-train the network using cross entropy (CE) loss like previous work BIBREF27, BIBREF8. However, there is no gold label for extractive summarization in most of the summarization datasets. Hence, we employ a greedy approach BIBREF16 to make the extractive oracles, where we add one sentence at a time incrementally to the summary, such that the ROUGE score of the current set of selected sentences is maximized for the entire ground truth summary. This doesn't guarantee an optimal set, but it is enough to teach the network to select plausible sentences. Formally, the network is trained to minimize the cross-entropy loss as follows:
where $s^*_t$ is the $t$-th generated oracle sentence.
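A minimal sketch of this greedy oracle construction is given below; a simple unigram-F1 proxy stands in for the ROUGE implementation used in the paper, and all names are illustrative:

```python
# A minimal sketch of the greedy oracle construction: repeatedly add the
# document sentence that most increases the score of the current summary
# against the gold summary, and stop when no sentence improves it. A simple
# unigram-F1 proxy stands in for the ROUGE package used in the paper.
from collections import Counter

def unigram_f1(summary_sents, gold_sents):
    s = Counter(" ".join(summary_sents).split())
    g = Counter(" ".join(gold_sents).split())
    overlap = sum((s & g).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(s.values()), overlap / sum(g.values())
    return 2 * p * r / (p + r)

def greedy_oracle(doc_sents, gold_sents, score=unigram_f1):
    selected, best = [], 0.0
    remaining = list(range(len(doc_sents)))
    while remaining:
        gains = [(score([doc_sents[j] for j in selected + [i]], gold_sents), i)
                 for i in remaining]
        new_best, idx = max(gains)
        if new_best <= best:               # no remaining sentence helps; stop
            break
        selected.append(idx); remaining.remove(idx); best = new_best
    return selected, best

doc = ["the cat sat on the mat", "stocks fell sharply today",
       "a cat was seen on a mat downtown"]
gold = ["a cat sat on a mat"]
print(greedy_oracle(doc, gold))  # indices of the oracle sentences and their score
```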
<<</Extractor Pre-training>>>
<<<Abstractor Training>>>
For the abstractor training, we should create training pairs for input and target sentences. As the abstractor paraphrases on sentence-level, we take a sentence-level search for each ground-truth summary sentence. We find the most similar document sentence $s^{\prime }_t$ by:
And then the abstractor is trained as a usual sequence-to-sequence model to minimize the cross-entropy loss:
where $w^a_j$ is the $j$-th word of the target sentence $a_t$, and $\Phi $ is the encoded representation for $s^{\prime }_t$.
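For illustration only, the pairing step could be sketched as follows, with a cheap token-overlap count standing in for the ROUGE-L similarity:

```python
# A minimal sketch (illustrative names) of building abstractor training pairs:
# each gold summary sentence a_t is matched to its most similar document
# sentence s'_t, which becomes the seq2seq input. The paper scores similarity
# with ROUGE-L; a cheap token-overlap count is used here instead.
def overlap(a, b):
    return len(set(a.split()) & set(b.split()))

def make_abstractor_pairs(doc_sents, gold_sents):
    return [(max(doc_sents, key=lambda s: overlap(s, a_t)), a_t)
            for a_t in gold_sents]

print(make_abstractor_pairs(
    ["the cat sat on the mat", "stocks fell sharply"],
    ["a cat sat on a mat"]))
# [('the cat sat on the mat', 'a cat sat on a mat')]
```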
<<</Abstractor Training>>>
<<</Training Submodules>>>
<<<Guiding to the Optimal Policy>>>
To optimize the ROUGE metric directly, we regard the extractor as an agent in the reinforcement learning paradigm BIBREF28. We view the extractor as a stochastic policy that generates actions (sentence selection) and receives the score of the final evaluation metric (summary-level ROUGE in our case) as the return.
While we are ultimately interested in the maximization of the score of a complete summary, simply awarding this score at the last step provides a very sparse training signal. For this reason we define intermediate rewards using reward shaping BIBREF12, which is inspired by BIBREF27's attempt for sequence prediction. Namely, we compute summary-level score values for all intermediate summaries:
The reward for each step $r_t$ is the difference between the consecutive pairs of scores:
This measures the amount of increase or decrease in the summary-level score from selecting $\hat{s}_t$. Using the shaped reward $r_t$ instead of awarding the whole score $R$ at the last step does not change the optimal policy BIBREF12. We define a discounted future reward for each step as $R_t=\sum _{i=t}^{k}\gamma ^{i-t}r_{i}$, where $\gamma $ is a discount factor.
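A minimal sketch of the shaped rewards and the discounted returns, assuming the summary-level scores of the intermediate summaries have already been computed (they stand in for summary-level ROUGE against the gold summary):

```python
# A minimal sketch (illustrative names): step_scores[t] is the summary-level
# score of the first t+1 rewritten sentences against the gold summary; the
# shaped reward r_t is the difference between consecutive scores, and the
# discounted return is accumulated from the end of the episode.
def shaped_rewards(step_scores):
    rewards, prev = [], 0.0
    for s in step_scores:
        rewards.append(s - prev)   # gain (or loss) from adding this sentence
        prev = s
    return rewards

def discounted_returns(rewards, gamma=0.95):
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

scores = [0.21, 0.34, 0.33, 0.41]            # hypothetical intermediate scores
r = shaped_rewards(scores)                   # approx. [0.21, 0.13, -0.01, 0.08]
print(r)
print(discounted_returns(r))
```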
Additionally, we add a `stop' action to the action space, by concatenating trainable parameters $h_{\text{stop}}$ (of the same dimension as $h_i$) to $H$. The agent treats it as another candidate to extract. When it selects `stop', the extracting episode ends and the final return is given. This encourages the model to extract additional sentences only when they are expected to increase the final return.
Following BIBREF8, we use the Advantage Actor Critic BIBREF29 method to train. We add a critic network to estimate a value function $V_t(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$, which then is used to compute advantage of each action (we will omit the current state $(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$ to simplify):
where $Q_t(s_i)$ is the expected future reward for selecting $s_i$ at the current step $t$. We maximize this advantage using policy gradient with a Monte-Carlo sample ($A_t(s_i) \approx R_t - V_t$):
where $\theta _\pi $ is the trainable parameters of the actor network (original extractor). And the critic is trained to minimize the square loss:
where $\theta _\psi $ is the trainable parameters of the critic network.
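As a hedged illustration of this update (assumed shapes and names, not the authors' code), the advantage, actor loss, and critic loss for one extraction episode could look as follows in PyTorch:

```python
# A hedged PyTorch sketch of the actor-critic update for one episode: the
# advantage is the Monte-Carlo return minus the critic's value estimate, the
# actor maximizes log-probability-weighted advantages, and the critic
# regresses onto the returns.
import torch

def a2c_losses(log_probs, values, returns):
    """log_probs: log pi(s_hat_t | state); values: V_t; returns: R_t; all shape (T,)."""
    advantages = returns - values.detach()             # A_t ~= R_t - V_t
    actor_loss = -(log_probs * advantages).mean()      # policy-gradient objective
    critic_loss = torch.nn.functional.mse_loss(values, returns)
    return actor_loss, critic_loss

log_probs = torch.log(torch.tensor([0.4, 0.3, 0.6]))
values = torch.tensor([0.30, 0.20, 0.10], requires_grad=True)
returns = torch.tensor([0.35, 0.18, 0.08])
actor_loss, critic_loss = a2c_losses(log_probs, values, returns)
print(actor_loss.item(), critic_loss.item())
```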
<<</Guiding to the Optimal Policy>>>
<<</Training>>>
<<<Experimental Setup>>>
<<<Datasets>>>
We evaluate the proposed approach on the CNN/Daily Mail BIBREF13 and New York Times BIBREF30 dataset, which are both standard corpora for multi-sentence abstractive summarization. Additionally, we test generalization of our model on DUC-2002 test set.
CNN/Daily Mail dataset consists of more than 300K news articles and each of them is paired with several highlights. We used the standard splits of BIBREF13 for training, validation and testing (90,226/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for Daily Mail). We did not anonymize entities. We followed the preprocessing methods in BIBREF25 after splitting sentences by Stanford CoreNLP BIBREF31.
The New York Times dataset also consists of many news articles. We followed the dataset splits of BIBREF14; 100,834 training and 9,706 test examples. We also followed their filtering procedure, removing documents with summaries that are shorter than 50 words. The final test set (NYT50) contains 3,452 examples out of the original 9,706.
The DUC-2002 dataset contains 567 document-summary pairs for single-document summarization. As a single document can have multiple summaries, we made one pair per summary. We used this dataset as a test set for our model trained on CNN/Daily Mail dataset to test generalization.
<<</Datasets>>>
<<<Implementation Details>>>
Our extractor is built on $\text{BERT}_\text{BASE}$ with fine-tuning, a smaller version than $\text{BERT}_\text{LARGE}$, due to limitations of time and space. We set the LSTM hidden size to 256 for all of our models. To initialize word embeddings for our abstractor, we use word2vec BIBREF32 of 128 dimensions trained on the same corpus. We optimize our model with the Adam optimizer BIBREF33 with $\beta _1=0.9$ and $\beta _2=0.999$. For extractor pre-training, we use the learning rate schedule following BIBREF21 with $warmup=10000$:
We set the learning rate to $1e^{-3}$ for the abstractor and $4e^{-6}$ for RL training. We apply gradient clipping using the L2 norm with threshold $2.0$. For RL training, we use $\gamma =0.95$ for the discount factor. To ease learning $h_{\text{stop}}$, we set the reward for the stop action to $\lambda \cdot \text{ROUGE-L}^{\text{summ}}_{F_1}(S, A)$, where $\lambda $ is a stop coefficient set to $0.08$. Our critic network shares the encoder with the actor (extractor) and has the same architecture as the actor except for the output layer, which estimates a scalar state value. The critic is initialized with the parameters of the pre-trained extractor wherever the two architectures coincide.
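The referenced schedule follows BIBREF21; as an assumption, the sketch below uses the standard warmup formula from that work, with the $d_{model}$ scaling constant chosen to match the $\text{BERT}_\text{BASE}$ hidden size since the exact constant is not reproduced here:

```python
# A sketch of the warmup schedule of BIBREF21 with warmup=10000. The d_model
# scaling constant is an assumption; 768 matches the BERT_BASE hidden size
# used by the extractor.
def noam_lr(step, warmup=10000, d_model=768):
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

for s in (1, 1000, 10000, 100000):
    print(s, round(noam_lr(s), 8))
```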
<<</Implementation Details>>>
<<<Evaluation>>>
We evaluate the performance of our method using different variants of ROUGE metric computed with respect to the gold summaries. On the CNN/Daily Mail and DUC-2002 dataset, we use standard ROUGE-1, ROUGE-2, and ROUGE-L BIBREF34 on full length $F_1$ with stemming as previous work did BIBREF16, BIBREF25, BIBREF8. On NYT50 dataset, following BIBREF14 and BIBREF35, we used the limited length ROUGE recall metric, truncating the generated summary to the length of the ground truth summary.
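As a convenience only (the official scores come from the standard ROUGE toolkit), the metrics can be approximated with the `rouge-score` Python package; stemming is enabled to mirror the full-length $F_1$ setting described above:

```python
# A convenience-only approximation: the `rouge-score` package
# (pip install rouge-score) gives comparable ROUGE-1/2/L F1 values.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the cat sat on the mat ."
candidate = "a cat was sitting on the mat ."
for name, s in scorer.score(reference, candidate).items():
    print(name, round(s.precision, 3), round(s.recall, 3), round(s.fmeasure, 3))
```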
<<</Evaluation>>>
<<</Experimental Setup>>>
<<<Results>>>
<<<CNN/Daily Mail>>>
Table TABREF24 shows the experimental results on CNN/Daily Mail dataset, with extractive models in the top block and abstractive models in the bottom block. For comparison, we list the performance of many recent approaches with ours.
<<<Extractive Summarization>>>
As BIBREF25 showed, the first 3 sentences (lead-3) in an article form a strong summarization baseline in the CNN/Daily Mail dataset. Therefore, the very first objective of extractive models is to outperform the simple method which always returns 3 or 4 sentences at the top. However, as Table TABREF27 shows, the ROUGE scores of lead baselines and extractors from previous work in the Sentence Rewrite framework BIBREF8, BIBREF15 are almost tied. We can easily conjecture that the limited performance of their full models is due to their extractor networks. Our extractor network with BERT (BERT-ext), as a single model, outperforms those models by large margins. Adding reinforcement learning (BERT-ext + RL) gives higher performance, which is competitive with other extractive approaches using pre-trained Transformers (see Table TABREF24). This shows the effectiveness of our learning method.
<<</Extractive Summarization>>>
<<<Abstractive Summarization>>>
Our abstractive approaches combine the extractor with the abstractor. The combined model (BERT-ext + abs) without additional RL training outperforms the Sentence Rewrite model BIBREF8 without reranking, showing the effectiveness of our extractor network. With the proposed RL training procedure (BERT-ext + abs + RL), our model exceeds the best model of BIBREF8. In addition, the result is better than those of all the other abstractive methods exploiting extractive approaches in them BIBREF9, BIBREF8, BIBREF10.
<<</Abstractive Summarization>>>
<<<Redundancy Control>>>
Although the proposed RL training inherently gives training signals that induce the model to avoid redundancy across sentences, there can still be overlaps between extracted sentences. We found that additional methods for reducing redundancy can improve the summarization quality, especially on the CNN/Daily Mail dataset.
We tried Trigram Blocking BIBREF1 for the extractor and Reranking BIBREF8 for the abstractor, and we empirically found that only the reranking improves the performance. It helps the model to compress the extracted sentences focusing on disjoint information, even if there are some partial overlaps between the sentences. Our best abstractive model (BERT-ext + abs + RL + rerank) achieves the new state-of-the-art performance for abstractive summarization in terms of average ROUGE score, with large margins on ROUGE-L.
However, we empirically found that the reranking method has no effect or a negative effect on the NYT50 and DUC-2002 datasets. Hence, we don't apply it for the remaining datasets.
<<</Redundancy Control>>>
<<<Combinatorial Reward>>>
Before seeing the effects of our summary-level rewards on final results, we check the upper bounds of different training signals for the full model. All the document sentences are paraphrased with our trained abstractor, and then we find the best set for each search method. Sentence-matching finds sentences with the highest ROUGE-L score for each sentence in the gold summary. This search method matches with the best reward from BIBREF8. Greedy Search is the same method explained for extractor pre-training in section SECREF11. Combination Search selects a set of sentences which has the highest summary-level ROUGE-L score, from all the possible combinations of sentences. Due to time constraints, we limited the maximum number of sentences to 5. This method corresponds to our final return in RL training.
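For concreteness, Combination Search can be sketched as an exhaustive enumeration over small sentence subsets; the toy scorer below is a stand-in for summary-level ROUGE-L:

```python
# A minimal sketch of Combination Search: enumerate all sentence subsets up to
# a maximum size (5 in the paper) and keep the one with the highest summary-
# level score. The toy Jaccard scorer stands in for summary-level ROUGE-L.
from itertools import combinations

def combination_search(rewritten_sents, gold_sents, score, max_k=5):
    best_set, best_score = (), float("-inf")
    for k in range(1, min(max_k, len(rewritten_sents)) + 1):
        for idx in combinations(range(len(rewritten_sents)), k):
            s = score([rewritten_sents[i] for i in idx], gold_sents)
            if s > best_score:
                best_set, best_score = idx, s
    return best_set, best_score

def toy_score(summary, gold):
    s, g = set(" ".join(summary).split()), set(" ".join(gold).split())
    return len(s & g) / (len(s | g) or 1)

print(combination_search(
    ["the cat sat", "stocks fell", "a cat on a mat"], ["a cat sat on a mat"], toy_score))
# -> ((0, 2), 0.83...) with this toy scorer
```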
Table TABREF31 shows the summary-level ROUGE scores of previously explained methods. We see considerable gaps between Sentence-matching and Greedy Search, while the scores of Greedy Search are close to those of Combination Search. Note that since we limited the number of sentences for Combination Search, the exact scores for it would be higher. The scores can be interpreted to be upper bounds for corresponding training methods. This result supports our training strategy; pre-training with Greedy Search and final optimization with the combinatorial return.
Additionally, we experiment to verify the contribution of our training method. We train the same model with different training signals; Sentence-level reward from BIBREF8 and combinatorial reward from ours. The results are shown in Table TABREF34. Both with and without reranking, the models trained with the combinatorial reward consistently outperform those trained with the sentence-level reward.
<<</Combinatorial Reward>>>
<<<Human Evaluation>>>
We also conduct a human evaluation to ensure robustness of our training procedure. We measure relevance and readability of the summaries. Relevance is based on the summary containing important, salient information from the input article, being correct by avoiding contradictory/unrelated information, and avoiding repeated/redundant information. Readability is based on the summary's fluency, grammaticality, and coherence. To evaluate both these criteria, we design an Amazon Mechanical Turk experiment based on a ranking method, inspired by BIBREF36. We randomly select 20 samples from the CNN/Daily Mail test set and ask the human testers (3 for each sample) to rank summaries (for relevance and readability) produced by 3 different models: our final model, that of BIBREF8 and that of BIBREF1. 2, 1 and 0 points were given according to the ranking. The models were anonymized and randomly shuffled. Following previous work, the input article and ground truth summaries are also shown to the human participants in addition to the three model summaries. From the results shown in Table TABREF36, we can see that our model is better in relevance compared to others. In terms of readability, there was no noticeable difference.
<<</Human Evaluation>>>
<<</CNN/Daily Mail>>>
<<<New York Times corpus>>>
Table TABREF38 gives the results on the NYT50 dataset. We see that our BERT-ext + abs + RL outperforms all the extractive and abstractive models, except on ROUGE-1 from BIBREF1. Comparing with two recent models that adapted BERT to their summarization models BIBREF1, BIBREF4, we can say that we have proposed another method that successfully leverages BERT for summarization. In addition, the experiment proves the effectiveness of our RL training, with about a 2 point improvement for each ROUGE metric.
<<</New York Times corpus>>>
<<<DUC-2002>>>
We also evaluated the models trained on the CNN/Daily Mail dataset on the out-of-domain DUC-2002 test set as shown in Table TABREF41. BERT-ext + abs + RL outperforms baseline models with large margins on all of the ROUGE scores. This result shows that our model generalizes better.
<<</DUC-2002>>>
<<</Results>>>
<<<Related Work>>>
There has been a variety of deep neural network models for abstractive document summarization. One of the most dominant structures is the sequence-to-sequence (seq2seq) model with attention mechanism BIBREF37, BIBREF38, BIBREF39. BIBREF25 introduced the Pointer Generator network that implicitly combines abstraction with extraction, using the copy mechanism BIBREF40, BIBREF41. More recently, several studies have attempted to improve the performance of abstractive summarization by explicitly combining it with extractive models. Some notable examples include the use of inconsistency loss BIBREF9, key phrase extraction BIBREF42, BIBREF10, and sentence extraction with rewriting BIBREF8. Our model improves Sentence Rewriting with BERT as an extractor and summary-level rewards to optimize the extractor.
Reinforcement learning has been shown to be effective for directly optimizing a non-differentiable objective in language generation, including text summarization BIBREF43, BIBREF27, BIBREF35, BIBREF44, BIBREF11. BIBREF27 use actor-critic methods for language generation, using reward shaping BIBREF12 to solve the sparsity of training signals. Inspired by this, we generalize it to sentence extraction to give a per-step reward while preserving optimality.
<<</Related Work>>>
<<<Conclusions>>>
We have improved Sentence Rewriting approaches for abstractive summarization, proposing a novel extractor architecture exploiting BERT and a novel training procedure which globally optimizes summary-level ROUGE metric. Our approach achieves the new state-of-the-art on both CNN/Daily Mail and New York Times datasets as well as much better generalization on DUC-2002 test set.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Abstract, Training"
],
"type": "disordered_section"
}
|
1909.08752
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Summary Level Training of Sentence Rewriting for Abstractive Summarization
<<<Abstract>>>
As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of extracting salient sentences from a document first and then paraphrasing the selected ones to generate a summary. However, the existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between a training objective and evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its ability on natural language understanding. In extensive experiments, we show that a combination of our proposed model and training procedure obtains new state-of-the-art performance on both CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on DUC-2002 test set.
<<</Abstract>>>
<<<Introduction>>>
The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping salient information of the original text. In general, there are two ways to do text summarization: Extractive and Abstractive BIBREF0. Extractive approaches generate summaries by selecting salient sentences or phrases from a source text, while abstractive approaches involve a process of paraphrasing or generating sentences to write a summary.
Recent work BIBREF1, BIBREF2 demonstrates that it is highly beneficial for extractive summarization models to incorporate pre-trained language models (LMs) such as BERT BIBREF3 into their architectures. However, the performance improvement from the pre-trained LMs is known to be relatively small in the case of abstractive summarization BIBREF4, BIBREF5. This discrepancy may be due to the difference between extractive and abstractive approaches in ways of dealing with the task—the former classifies whether each sentence is to be included in a summary, while the latter generates a whole summary from scratch. In other words, as most of the pre-trained LMs are designed to be of help to tasks which can be categorized as classification, including extractive summarization, they are not guaranteed to be advantageous to abstractive summarization models that should be capable of generating language BIBREF6, BIBREF7.
On the other hand, recent studies for abstractive summarization BIBREF8, BIBREF9, BIBREF10 have attempted to exploit extractive models. Among these, a notable one is BIBREF8, in which a sophisticated model called Reinforce-Selected Sentence Rewriting is proposed. The model consists of both an extractor and abstractor, where the extractor picks out salient sentences first from a source article, and then the abstractor rewrites and compresses the extracted sentences into a complete summary. It is further fine-tuned by training the extractor with the rewards derived from sentence-level ROUGE scores of the summary generated from the abstractor.
In this paper, we improve the model of BIBREF8, addressing two primary issues. Firstly, we argue there is a bottleneck in the existing extractor on the basis of the observation that its performance as an independent summarization model (i.e., without the abstractor) is no better than solid baselines such as selecting the first 3 sentences. To resolve the problem, we present a novel neural extractor exploiting the pre-trained LMs (BERT in this work) which are expected to perform better according to the recent studies BIBREF1, BIBREF2. Since the extractor is a sort of sentence classifier, we expect that it can make good use of the ability of pre-trained LMs which is proven to be effective in classification.
Secondly, the other point is that there is a mismatch between the training objective and evaluation metric; the previous work utilizes the sentence-level ROUGE scores as a reinforcement learning objective, while the final performance of a summarization model is evaluated by the summary-level ROUGE scores. Moreover, as BIBREF11 pointed out, sentences with the highest individual ROUGE scores do not necessarily lead to an optimal summary, since they may contain overlapping contents, causing verbose and redundant summaries. Therefore, we propose to directly use the summary-level ROUGE scores as an objective instead of the sentence-level scores. A potential problem arising from this approach is the sparsity of training signals, because the summary-level ROUGE scores are calculated only once for each training episode. To alleviate this problem, we use reward shaping BIBREF12 to give an intermediate signal for each action, preserving the optimal policy.
We empirically demonstrate the superiority of our approach by achieving new state-of-the-art abstractive summarization results on CNN/Daily Mail and New York Times datasets BIBREF13, BIBREF14. It is worth noting that our approach shows large improvements especially on ROUGE-L score which is considered a means of assessing fluency BIBREF11. In addition, our model performs much better than previous work when testing on DUC-2002 dataset, showing better generalization and robustness of our model.
Our contributions in this work are three-fold: a novel successful application of pre-trained transformers for abstractive summarization; suggesting a training method to globally optimize sentence selection; achieving the state-of-the-art results on the benchmark datasets, CNN/Daily Mail and New York Times.
<<</Introduction>>>
<<<Background>>>
<<<Sentence Rewriting>>>
In this paper, we focus on single-document multi-sentence summarization and propose a neural abstractive model based on the Sentence Rewriting framework BIBREF8, BIBREF15 which consists of two parts: a neural network for the extractor and another network for the abstractor. The extractor network is designed to extract salient sentences from a source article. The abstractor network rewrites the extracted sentences into a short summary.
<<</Sentence Rewriting>>>
<<<Learning Sentence Selection>>>
The most common way to train an extractor to select informative sentences is building extractive oracles as gold targets, and training with cross-entropy (CE) loss. An oracle consists of a set of sentences with the highest possible ROUGE scores. Building oracles is finding an optimal combination of sentences, where there are $2^n$ possible combinations for each example. Because of this, the exact optimization for ROUGE scores is intractable. Therefore, alternative methods identify the set of sentences with greedy search BIBREF16, sentence-level search BIBREF9, BIBREF17 or collective search using a limited number of sentences BIBREF15, which construct suboptimal oracles. Even if all the optimal oracles are found, training with CE loss using these labels will cause underfitting as it will only maximize probabilities for sentences in label sets and ignore all other sentences.
Alternatively, reinforcement learning (RL) can give room for exploration in the search space. BIBREF8, our baseline work, proposed to apply policy gradient methods to train an extractor. This approach makes an end-to-end trainable stochastic computation graph, encouraging the model to select sentences with high ROUGE scores. However, they define a reward for an action (sentence selection) as a sentence-level ROUGE score between the chosen sentence and a sentence in the ground truth summary for that time step. This leads the extractor agent to a suboptimal policy; the set of sentences matching individually with each sentence in a ground truth summary isn't necessarily optimal in terms of summary-level ROUGE score.
BIBREF11 proposed policy gradient with rewards from summary-level ROUGE. They defined an action as sampling a summary from candidate summaries that contain the limited number of plausible sentences. After training, a sentence is ranked high for selection if it often occurs in high scoring summaries. However, their approach still has a risk of ranking redundant sentences high; if two highly overlapped sentences have salient information, they would be ranked high together, increasing the probability of being sampled in one summary.
To tackle this problem, we propose a training method using reinforcement learning which globally optimizes summary-level ROUGE score and gives intermediate rewards to ease the learning.
<<</Learning Sentence Selection>>>
<<<Pre-trained Transformers>>>
Transferring representations from pre-trained transformer language models has been highly successful in the domain of natural language understanding tasks BIBREF18, BIBREF3, BIBREF19, BIBREF20. These methods first pre-train highly stacked transformer blocks BIBREF21 on a huge unlabeled corpus, and then fine-tune the models or representations on downstream tasks.
<<</Pre-trained Transformers>>>
<<</Background>>>
<<<Model>>>
Our model consists of two neural network modules, i.e. an extractor and abstractor. The extractor encodes a source document and chooses sentences from the document, and then the abstractor paraphrases the summary candidates. Formally, a single document consists of $n$ sentences $D=\lbrace s_1,s_2,\cdots ,s_n\rbrace $. We denote $i$-th sentence as $s_i=\lbrace w_{i1},w_{i2},\cdots ,w_{im}\rbrace $ where $w_{ij}$ is the $j$-th word in $s_i$. The extractor learns to pick out a subset of $D$ denoted as $\hat{D}=\lbrace \hat{s}_1,\hat{s}_2,\cdots ,\hat{s}_k|\hat{s}_i\in D\rbrace $ where $k$ sentences are selected. The abstractor rewrites each of the selected sentences to form a summary $S=\lbrace f(\hat{s}_1),f(\hat{s}_2),\cdots ,f(\hat{s}_k)\rbrace $, where $f$ is an abstracting function. And a gold summary consists of $l$ sentences $A=\lbrace a_1,a_2,\cdots ,a_l\rbrace $.
<<<Extractor Network>>>
The extractor is based on the encoder-decoder framework. We adapt BERT for the encoder to exploit contextualized representations from pre-trained transformers. BERT as the encoder maps the input sequence $D$ to sentence representation vectors $H=\lbrace h_1,h_2,\cdots ,h_n\rbrace $, where $h_i$ is for the $i$-th sentence in the document. Then, the decoder utilizes $H$ to extract $\hat{D}$ from $D$.
<<<Leveraging Pre-trained Transformers>>>
Although we require the encoder to output the representation for each sentence, the output vectors from BERT are grounded to tokens instead of sentences. Therefore, we modify the input sequence and embeddings of BERT as BIBREF1 did.
In the original BERT configuration, a [CLS] token is used to get features from one sentence or a pair of sentences. Since we need a symbol for each sentence representation, we insert the [CLS] token before each sentence. We also add a [SEP] token at the end of each sentence, which is used to differentiate multiple sentences. As a result, the vector for the $i$-th [CLS] symbol from the top BERT layer corresponds to the $i$-th sentence representation $h_i$.
In addition, we add interval segment embeddings as input for BERT to distinguish multiple sentences within a document. For $s_i$ we assign a segment embedding $E_A$ or $E_B$ conditioned on whether $i$ is odd or even. For example, for a consecutive sequence of sentences $s_1, s_2, s_3, s_4, s_5$, we assign $E_A, E_B, E_A, E_B, E_A$ in order. All the words in each sentence are assigned to the same segment embedding, i.e. the segment embeddings for $w_{11}, w_{12},\cdots ,w_{1m}$ are $E_A,E_A,\cdots ,E_A$. An illustration for this procedure is shown in Figure FIGREF1.
<<</Leveraging Pre-trained Transformers>>>
<<<Sentence Selection>>>
We use LSTM Pointer Network BIBREF22 as the decoder to select the extracted sentences based on the above sentence representations. The decoder extracts sentences recurrently, producing a distribution over all of the remaining sentence representations excluding those already selected. Since we use the sequential model which selects one sentence at a time step, our decoder can consider the previously selected sentences. This property is needed to avoid selecting sentences that have overlapping information with the sentences extracted already.
As the decoder structure is almost the same as that of the previous work, we restate the equations of BIBREF8 to avoid confusion, with minor modifications to agree with our notations. Formally, the extraction probability is calculated as:
where $e_t$ is the output of the glimpse operation:
In Equation DISPLAY_FORM9, $z_t$ is the hidden state of the LSTM decoder at time $t$ (shown in green in Figure FIGREF1). All the $W$ and $v$ are trainable parameters.
<<</Sentence Selection>>>
<<</Extractor Network>>>
<<<Abstractor Network>>>
The abstractor network approximates $f$, which compresses and paraphrases an extracted document sentence to a concise summary sentence. We use the standard attention based sequence-to-sequence (seq2seq) model BIBREF23, BIBREF24 with the copying mechanism BIBREF25 for handling out-of-vocabulary (OOV) words. Our abstractor is practically identical to the one proposed in BIBREF8.
<<</Abstractor Network>>>
<<</Model>>>
<<<Training>>>
In our model, an extractor selects a series of sentences, and then an abstractor paraphrases them. As they work in different ways, we need different training strategies suitable for each of them. Training the abstractor is relatively straightforward: maximizing the log-likelihood for the next word given the previous ground truth words. However, there are several issues for extractor training. First, the extractor should consider the abstractor's rewriting process when it selects sentences. This causes a weak supervision problem BIBREF26, since the extractor gets training signals indirectly after the paraphrasing processes are finished. In addition, since this procedure contains sampling or maximum selection, the extraction performed by the extractor is non-differentiable. Lastly, although our goal is maximizing ROUGE scores, neural models cannot be trained directly by maximum likelihood estimation from them.
To address the issues above, we apply standard policy gradient methods, and we propose a novel training procedure for the extractor which guides it to the optimal policy in terms of the summary-level ROUGE. As usual in RL for sequence prediction, we pre-train submodules and apply RL to fine-tune the extractor.
<<<Training Submodules>>>
<<<Extractor Pre-training>>>
Starting from a poor random policy makes it difficult to train the extractor agent to converge towards the optimal policy. Thus, we pre-train the network using cross entropy (CE) loss like previous work BIBREF27, BIBREF8. However, there is no gold label for extractive summarization in most of the summarization datasets. Hence, we employ a greedy approach BIBREF16 to make the extractive oracles, where we add one sentence at a time incrementally to the summary, such that the ROUGE score of the current set of selected sentences is maximized for the entire ground truth summary. This doesn't guarantee an optimal set, but it is enough to teach the network to select plausible sentences. Formally, the network is trained to minimize the cross-entropy loss as follows:
where $s^*_t$ is the $t$-th generated oracle sentence.
<<</Extractor Pre-training>>>
<<<Abstractor Training>>>
For the abstractor training, we should create training pairs for input and target sentences. As the abstractor paraphrases on sentence-level, we take a sentence-level search for each ground-truth summary sentence. We find the most similar document sentence $s^{\prime }_t$ by:
And then the abstractor is trained as a usual sequence-to-sequence model to minimize the cross-entropy loss:
where $w^a_j$ is the $j$-th word of the target sentence $a_t$, and $\Phi $ is the encoded representation for $s^{\prime }_t$.
<<</Abstractor Training>>>
<<</Training Submodules>>>
<<<Guiding to the Optimal Policy>>>
To optimize the ROUGE metric directly, we regard the extractor as an agent in the reinforcement learning paradigm BIBREF28. We view the extractor as a stochastic policy that generates actions (sentence selection) and receives the score of the final evaluation metric (summary-level ROUGE in our case) as the return.
While we are ultimately interested in the maximization of the score of a complete summary, simply awarding this score at the last step provides a very sparse training signal. For this reason we define intermediate rewards using reward shaping BIBREF12, which is inspired by BIBREF27's attempt for sequence prediction. Namely, we compute summary-level score values for all intermediate summaries:
The reward for each step $r_t$ is the difference between the consecutive pairs of scores:
This measures the amount of increase or decrease in the summary-level score from selecting $\hat{s}_t$. Using the shaped reward $r_t$ instead of awarding the whole score $R$ at the last step does not change the optimal policy BIBREF12. We define a discounted future reward for each step as $R_t=\sum _{i=t}^{k}\gamma ^{i-t}r_{i}$, where $\gamma $ is a discount factor.
Additionally, we add a `stop' action to the action space, by concatenating trainable parameters $h_{\text{stop}}$ (of the same dimension as $h_i$) to $H$. The agent treats it as another candidate to extract. When it selects `stop', the extracting episode ends and the final return is given. This encourages the model to extract additional sentences only when they are expected to increase the final return.
Following BIBREF8, we use the Advantage Actor Critic BIBREF29 method to train. We add a critic network to estimate a value function $V_t(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$, which then is used to compute advantage of each action (we will omit the current state $(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$ to simplify):
where $Q_t(s_i)$ is the expected future reward for selecting $s_i$ at the current step $t$. We maximize this advantage using policy gradient with a Monte-Carlo sample ($A_t(s_i) \approx R_t - V_t$):
where $\theta _\pi $ is the trainable parameters of the actor network (original extractor). And the critic is trained to minimize the square loss:
where $\theta _\psi $ is the trainable parameters of the critic network.
<<</Guiding to the Optimal Policy>>>
<<</Training>>>
<<<Experimental Setup>>>
<<<Datasets>>>
We evaluate the proposed approach on the CNN/Daily Mail BIBREF13 and New York Times BIBREF30 dataset, which are both standard corpora for multi-sentence abstractive summarization. Additionally, we test generalization of our model on DUC-2002 test set.
CNN/Daily Mail dataset consists of more than 300K news articles and each of them is paired with several highlights. We used the standard splits of BIBREF13 for training, validation and testing (90,226/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for Daily Mail). We did not anonymize entities. We followed the preprocessing methods in BIBREF25 after splitting sentences by Stanford CoreNLP BIBREF31.
The New York Times dataset also consists of many news articles. We followed the dataset splits of BIBREF14; 100,834 training and 9,706 test examples. We also followed their filtering procedure, removing documents with summaries that are shorter than 50 words. The final test set (NYT50) contains 3,452 examples out of the original 9,706.
The DUC-2002 dataset contains 567 document-summary pairs for single-document summarization. As a single document can have multiple summaries, we made one pair per summary. We used this dataset as a test set for our model trained on CNN/Daily Mail dataset to test generalization.
<<</Datasets>>>
<<<Implementation Details>>>
Our extractor is built on $\text{BERT}_\text{BASE}$ with fine-tuning, a smaller version than $\text{BERT}_\text{LARGE}$, due to limitations of time and space. We set the LSTM hidden size to 256 for all of our models. To initialize word embeddings for our abstractor, we use word2vec BIBREF32 of 128 dimensions trained on the same corpus. We optimize our model with the Adam optimizer BIBREF33 with $\beta _1=0.9$ and $\beta _2=0.999$. For extractor pre-training, we use the learning rate schedule following BIBREF21 with $warmup=10000$:
We set the learning rate to $1e^{-3}$ for the abstractor and $4e^{-6}$ for RL training. We apply gradient clipping using the L2 norm with threshold $2.0$. For RL training, we use $\gamma =0.95$ for the discount factor. To ease learning $h_{\text{stop}}$, we set the reward for the stop action to $\lambda \cdot \text{ROUGE-L}^{\text{summ}}_{F_1}(S, A)$, where $\lambda $ is a stop coefficient set to $0.08$. Our critic network shares the encoder with the actor (extractor) and has the same architecture as the actor except for the output layer, which estimates a scalar state value. The critic is initialized with the parameters of the pre-trained extractor wherever the two architectures coincide.
<<</Implementation Details>>>
<<<Evaluation>>>
We evaluate the performance of our method using different variants of ROUGE metric computed with respect to the gold summaries. On the CNN/Daily Mail and DUC-2002 dataset, we use standard ROUGE-1, ROUGE-2, and ROUGE-L BIBREF34 on full length $F_1$ with stemming as previous work did BIBREF16, BIBREF25, BIBREF8. On NYT50 dataset, following BIBREF14 and BIBREF35, we used the limited length ROUGE recall metric, truncating the generated summary to the length of the ground truth summary.
<<</Evaluation>>>
<<</Experimental Setup>>>
<<<Results>>>
<<<CNN/Daily Mail>>>
Table TABREF24 shows the experimental results on CNN/Daily Mail dataset, with extractive models in the top block and abstractive models in the bottom block. For comparison, we list the performance of many recent approaches with ours.
<<<Extractive Summarization>>>
As BIBREF25 showed, the first 3 sentences (lead-3) in an article form a strong summarization baseline in the CNN/Daily Mail dataset. Therefore, the very first objective of extractive models is to outperform the simple method which always returns 3 or 4 sentences at the top. However, as Table TABREF27 shows, the ROUGE scores of lead baselines and extractors from previous work in the Sentence Rewrite framework BIBREF8, BIBREF15 are almost tied. We can easily conjecture that the limited performance of their full models is due to their extractor networks. Our extractor network with BERT (BERT-ext), as a single model, outperforms those models by large margins. Adding reinforcement learning (BERT-ext + RL) gives higher performance, which is competitive with other extractive approaches using pre-trained Transformers (see Table TABREF24). This shows the effectiveness of our learning method.
<<</Extractive Summarization>>>
<<<Abstractive Summarization>>>
Our abstractive approaches combine the extractor with the abstractor. The combined model (BERT-ext + abs) without additional RL training outperforms the Sentence Rewrite model BIBREF8 without reranking, showing the effectiveness of our extractor network. With the proposed RL training procedure (BERT-ext + abs + RL), our model exceeds the best model of BIBREF8. In addition, the result is better than those of all the other abstractive methods exploiting extractive approaches in them BIBREF9, BIBREF8, BIBREF10.
<<</Abstractive Summarization>>>
<<<Redundancy Control>>>
Although the proposed RL training inherently gives training signals that induce the model to avoid redundancy across sentences, there can still be overlaps between extracted sentences. We found that additional methods for reducing redundancy can improve the summarization quality, especially on the CNN/Daily Mail dataset.
We tried Trigram Blocking BIBREF1 for the extractor and Reranking BIBREF8 for the abstractor, and we empirically found that only the reranking improves the performance. It helps the model to compress the extracted sentences focusing on disjoint information, even if there are some partial overlaps between the sentences. Our best abstractive model (BERT-ext + abs + RL + rerank) achieves the new state-of-the-art performance for abstractive summarization in terms of average ROUGE score, with large margins on ROUGE-L.
However, we empirically found that the reranking method has no effect or a negative effect on the NYT50 and DUC-2002 datasets. Hence, we don't apply it for the remaining datasets.
<<</Redundancy Control>>>
<<<Combinatorial Reward>>>
Before seeing the effects of our summary-level rewards on final results, we check the upper bounds of different training signals for the full model. All the document sentences are paraphrased with our trained abstractor, and then we find the best set for each search method. Sentence-matching finds sentences with the highest ROUGE-L score for each sentence in the gold summary. This search method matches with the best reward from BIBREF8. Greedy Search is the same method explained for extractor pre-training in section SECREF11. Combination Search selects a set of sentences which has the highest summary-level ROUGE-L score, from all the possible combinations of sentences. Due to time constraints, we limited the maximum number of sentences to 5. This method corresponds to our final return in RL training.
Table TABREF31 shows the summary-level ROUGE scores of previously explained methods. We see considerable gaps between Sentence-matching and Greedy Search, while the scores of Greedy Search are close to those of Combination Search. Note that since we limited the number of sentences for Combination Search, the exact scores for it would be higher. The scores can be interpreted to be upper bounds for corresponding training methods. This result supports our training strategy; pre-training with Greedy Search and final optimization with the combinatorial return.
Additionally, we experiment to verify the contribution of our training method. We train the same model with different training signals; Sentence-level reward from BIBREF8 and combinatorial reward from ours. The results are shown in Table TABREF34. Both with and without reranking, the models trained with the combinatorial reward consistently outperform those trained with the sentence-level reward.
<<</Combinatorial Reward>>>
<<<Human Evaluation>>>
We also conduct a human evaluation to ensure robustness of our training procedure. We measure relevance and readability of the summaries. Relevance is based on the summary containing important, salient information from the input article, being correct by avoiding contradictory/unrelated information, and avoiding repeated/redundant information. Readability is based on the summary's fluency, grammaticality, and coherence. To evaluate both these criteria, we design an Amazon Mechanical Turk experiment based on a ranking method, inspired by BIBREF36. We randomly select 20 samples from the CNN/Daily Mail test set and ask the human testers (3 for each sample) to rank summaries (for relevance and readability) produced by 3 different models: our final model, that of BIBREF8 and that of BIBREF1. 2, 1 and 0 points were given according to the ranking. The models were anonymized and randomly shuffled. Following previous work, the input article and ground truth summaries are also shown to the human participants in addition to the three model summaries. From the results shown in Table TABREF36, we can see that our model is better in relevance compared to others. In terms of readability, there was no noticeable difference.
<<</Human Evaluation>>>
<<</CNN/Daily Mail>>>
<<<New York Times corpus>>>
Table TABREF38 gives the results on the NYT50 dataset. We see that our BERT-ext + abs + RL outperforms all the extractive and abstractive models, except on ROUGE-1 from BIBREF1. Comparing with two recent models that adapted BERT to their summarization models BIBREF1, BIBREF4, we can say that we have proposed another method that successfully leverages BERT for summarization. In addition, the experiment proves the effectiveness of our RL training, with about a 2 point improvement for each ROUGE metric.
<<</New York Times corpus>>>
<<<DUC-2002>>>
We also evaluated the models trained on the CNN/Daily Mail dataset on the out-of-domain DUC-2002 test set as shown in Table TABREF41. BERT-ext + abs + RL outperforms baseline models with large margins on all of the ROUGE scores. This result shows that our model generalizes better.
<<</DUC-2002>>>
<<</Results>>>
<<<Related Work>>>
There has been a variety of deep neural network models for abstractive document summarization. One of the most dominant structures is the sequence-to-sequence (seq2seq) model with attention mechanism BIBREF37, BIBREF38, BIBREF39. BIBREF25 introduced the Pointer Generator network that implicitly combines abstraction with extraction, using the copy mechanism BIBREF40, BIBREF41. More recently, several studies have attempted to improve the performance of abstractive summarization by explicitly combining it with extractive models. Some notable examples include the use of inconsistency loss BIBREF9, key phrase extraction BIBREF42, BIBREF10, and sentence extraction with rewriting BIBREF8. Our model improves Sentence Rewriting with BERT as an extractor and summary-level rewards to optimize the extractor.
Reinforcement learning has been shown to be effective for directly optimizing a non-differentiable objective in language generation, including text summarization BIBREF43, BIBREF27, BIBREF35, BIBREF44, BIBREF11. BIBREF27 use actor-critic methods for language generation, using reward shaping BIBREF12 to solve the sparsity of training signals. Inspired by this, we generalize it to sentence extraction to give a per-step reward while preserving optimality.
<<</Related Work>>>
<<<Conclusions>>>
We have improved Sentence Rewriting approaches for abstractive summarization, proposing a novel extractor architecture exploiting BERT and a novel training procedure which globally optimizes summary-level ROUGE metric. Our approach achieves the new state-of-the-art on both CNN/Daily Mail and New York Times datasets as well as much better generalization on DUC-2002 test set.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Introduction, Experimental Setup"
],
"type": "disordered_section"
}
|
1909.08752
|
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure.
Please identify the two sections and output the corresponding section names.
The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content.
Context: <<<Title>>>
Summary Level Training of Sentence Rewriting for Abstractive Summarization
<<<Abstract>>>
As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of extracting salient sentences from a document first and then paraphrasing the selected ones to generate a summary. However, the existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between a training objective and evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its ability on natural language understanding. In extensive experiments, we show that a combination of our proposed model and training procedure obtains new state-of-the-art performance on both CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on DUC-2002 test set.
<<</Abstract>>>
<<<Introduction>>>
The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping salient information of the original text. In general, there are two ways to do text summarization: Extractive and Abstractive BIBREF0. Extractive approaches generate summaries by selecting salient sentences or phrases from a source text, while abstractive approaches involve a process of paraphrasing or generating sentences to write a summary.
Recent work BIBREF1, BIBREF2 demonstrates that it is highly beneficial for extractive summarization models to incorporate pre-trained language models (LMs) such as BERT BIBREF3 into their architectures. However, the performance improvement from the pre-trained LMs is known to be relatively small in the case of abstractive summarization BIBREF4, BIBREF5. This discrepancy may be due to the difference between extractive and abstractive approaches in ways of dealing with the task—the former classifies whether each sentence is to be included in a summary, while the latter generates a whole summary from scratch. In other words, as most of the pre-trained LMs are designed to be of help to tasks which can be categorized as classification, including extractive summarization, they are not guaranteed to be advantageous to abstractive summarization models that should be capable of generating language BIBREF6, BIBREF7.
On the other hand, recent studies for abstractive summarization BIBREF8, BIBREF9, BIBREF10 have attempted to exploit extractive models. Among these, a notable one is BIBREF8, in which a sophisticated model called Reinforce-Selected Sentence Rewriting is proposed. The model consists of both an extractor and abstractor, where the extractor picks out salient sentences first from a source article, and then the abstractor rewrites and compresses the extracted sentences into a complete summary. It is further fine-tuned by training the extractor with the rewards derived from sentence-level ROUGE scores of the summary generated from the abstractor.
In this paper, we improve the model of BIBREF8, addressing two primary issues. Firstly, we argue there is a bottleneck in the existing extractor on the basis of the observation that its performance as an independent summarization model (i.e., without the abstractor) is no better than solid baselines such as selecting the first 3 sentences. To resolve the problem, we present a novel neural extractor exploiting the pre-trained LMs (BERT in this work) which are expected to perform better according to the recent studies BIBREF1, BIBREF2. Since the extractor is a sort of sentence classifier, we expect that it can make good use of the ability of pre-trained LMs which is proven to be effective in classification.
Secondly, the other point is that there is a mismatch between the training objective and evaluation metric; the previous work utilizes the sentence-level ROUGE scores as a reinforcement learning objective, while the final performance of a summarization model is evaluated by the summary-level ROUGE scores. Moreover, as BIBREF11 pointed out, sentences with the highest individual ROUGE scores do not necessarily lead to an optimal summary, since they may contain overlapping contents, causing verbose and redundant summaries. Therefore, we propose to directly use the summary-level ROUGE scores as an objective instead of the sentence-level scores. A potential problem arising from this approach is the sparsity of training signals, because the summary-level ROUGE scores are calculated only once for each training episode. To alleviate this problem, we use reward shaping BIBREF12 to give an intermediate signal for each action, preserving the optimal policy.
We empirically demonstrate the superiority of our approach by achieving new state-of-the-art abstractive summarization results on CNN/Daily Mail and New York Times datasets BIBREF13, BIBREF14. It is worth noting that our approach shows large improvements especially on ROUGE-L score which is considered a means of assessing fluency BIBREF11. In addition, our model performs much better than previous work when testing on DUC-2002 dataset, showing better generalization and robustness of our model.
Our contributions in this work are three-fold: a novel successful application of pre-trained transformers for abstractive summarization; suggesting a training method to globally optimize sentence selection; achieving the state-of-the-art results on the benchmark datasets, CNN/Daily Mail and New York Times.
<<</Introduction>>>
<<<Background>>>
<<<Sentence Rewriting>>>
In this paper, we focus on single-document multi-sentence summarization and propose a neural abstractive model based on the Sentence Rewriting framework BIBREF8, BIBREF15 which consists of two parts: a neural network for the extractor and another network for the abstractor. The extractor network is designed to extract salient sentences from a source article. The abstractor network rewrites the extracted sentences into a short summary.
<<</Sentence Rewriting>>>
<<<Learning Sentence Selection>>>
The most common way to train an extractor to select informative sentences is building extractive oracles as gold targets, and training with cross-entropy (CE) loss. An oracle consists of a set of sentences with the highest possible ROUGE scores. Building oracles is finding an optimal combination of sentences, where there are $2^n$ possible combinations for each example. Because of this, the exact optimization for ROUGE scores is intractable. Therefore, alternative methods identify the set of sentences with greedy search BIBREF16, sentence-level search BIBREF9, BIBREF17 or collective search using a limited number of sentences BIBREF15, which construct suboptimal oracles. Even if all the optimal oracles are found, training with CE loss using these labels will cause underfitting as it will only maximize probabilities for sentences in label sets and ignore all other sentences.
Alternatively, reinforcement learning (RL) can give room for exploration in the search space. BIBREF8, our baseline work, proposed to apply policy gradient methods to train an extractor. This approach makes an end-to-end trainable stochastic computation graph, encouraging the model to select sentences with high ROUGE scores. However, they define a reward for an action (sentence selection) as a sentence-level ROUGE score between the chosen sentence and a sentence in the ground truth summary for that time step. This leads the extractor agent to a suboptimal policy; the set of sentences matching individually with each sentence in a ground truth summary isn't necessarily optimal in terms of summary-level ROUGE score.
BIBREF11 proposed policy gradient with rewards from summary-level ROUGE. They defined an action as sampling a summary from candidate summaries that contain the limited number of plausible sentences. After training, a sentence is ranked high for selection if it often occurs in high scoring summaries. However, their approach still has a risk of ranking redundant sentences high; if two highly overlapped sentences have salient information, they would be ranked high together, increasing the probability of being sampled in one summary.
To tackle this problem, we propose a training method using reinforcement learning which globally optimizes summary-level ROUGE score and gives intermediate rewards to ease the learning.
<<</Learning Sentence Selection>>>
<<<Pre-trained Transformers>>>
Transferring representations from pre-trained transformer language models has been highly successful in the domain of natural language understanding tasks BIBREF18, BIBREF3, BIBREF19, BIBREF20. These methods first pre-train highly stacked transformer blocks BIBREF21 on a huge unlabeled corpus, and then fine-tune the models or representations on downstream tasks.
<<</Pre-trained Transformers>>>
<<</Background>>>
<<<Model>>>
Our model consists of two neural network modules, i.e., an extractor and an abstractor. The extractor encodes a source document and chooses sentences from the document, and then the abstractor paraphrases the summary candidates. Formally, a single document consists of $n$ sentences $D=\lbrace s_1,s_2,\cdots ,s_n\rbrace $. We denote the $i$-th sentence as $s_i=\lbrace w_{i1},w_{i2},\cdots ,w_{im}\rbrace $ where $w_{ij}$ is the $j$-th word in $s_i$. The extractor learns to pick out a subset of $D$ denoted as $\hat{D}=\lbrace \hat{s}_1,\hat{s}_2,\cdots ,\hat{s}_k|\hat{s}_i\in D\rbrace $ where $k$ sentences are selected. The abstractor rewrites each of the selected sentences to form a summary $S=\lbrace f(\hat{s}_1),f(\hat{s}_2),\cdots ,f(\hat{s}_k)\rbrace $, where $f$ is an abstracting function. A gold summary consists of $l$ sentences $A=\lbrace a_1,a_2,\cdots ,a_l\rbrace $.
<<<Extractor Network>>>
The extractor is based on the encoder-decoder framework. We adapt BERT for the encoder to exploit contextualized representations from pre-trained transformers. BERT as the encoder maps the input sequence $D$ to sentence representation vectors $H=\lbrace h_1,h_2,\cdots ,h_n\rbrace $, where $h_i$ is for the $i$-th sentence in the document. Then, the decoder utilizes $H$ to extract $\hat{D}$ from $D$.
<<<Leveraging Pre-trained Transformers>>>
Although we require the encoder to output the representation for each sentence, the output vectors from BERT are grounded to tokens instead of sentences. Therefore, we modify the input sequence and embeddings of BERT as BIBREF1 did.
In the original BERT configuration, a [CLS] token is used to get features from one sentence or a pair of sentences. Since we need a symbol for each sentence representation, we insert the [CLS] token before each sentence. We also add a [SEP] token at the end of each sentence, which is used to differentiate multiple sentences. As a result, the vector for the $i$-th [CLS] symbol from the top BERT layer corresponds to the $i$-th sentence representation $h_i$.
In addition, we add interval segment embeddings as input for BERT to distinguish multiple sentences within a document. For $s_i$ we assign a segment embedding $E_A$ or $E_B$ depending on whether $i$ is odd or even. For example, for a consecutive sequence of sentences $s_1, s_2, s_3, s_4, s_5$, we assign $E_A, E_B, E_A, E_B, E_A$ in order. All the words in each sentence are assigned the same segment embedding, i.e., the segment embeddings for $w_{11}, w_{12},\cdots ,w_{1m}$ are $E_A,E_A,\cdots ,E_A$. An illustration of this procedure is shown in Figure FIGREF1.
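To make the input construction concrete, the following is a minimal Python sketch of the formatting step described above; the tokenizer interface and function names are illustrative assumptions, not the authors' code.

def build_extractor_input(sentences, tokenize):
    # Insert [CLS] before and [SEP] after every sentence, and alternate the
    # interval segment id (0 for E_A, 1 for E_B) per sentence, as described above.
    tokens, segment_ids, cls_positions = [], [], []
    for i, sent in enumerate(sentences):
        seg = 0 if i % 2 == 0 else 1        # s_1, s_3, ... get E_A; s_2, s_4, ... get E_B
        cls_positions.append(len(tokens))   # position of this sentence's [CLS] symbol
        pieces = ["[CLS]"] + tokenize(sent) + ["[SEP]"]
        tokens.extend(pieces)
        segment_ids.extend([seg] * len(pieces))
    return tokens, segment_ids, cls_positions

The top-layer BERT outputs at the recorded [CLS] positions then serve as the sentence representations $h_i$.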
<<</Leveraging Pre-trained Transformers>>>
<<<Sentence Selection>>>
We use LSTM Pointer Network BIBREF22 as the decoder to select the extracted sentences based on the above sentence representations. The decoder extracts sentences recurrently, producing a distribution over all of the remaining sentence representations excluding those already selected. Since we use the sequential model which selects one sentence at a time step, our decoder can consider the previously selected sentences. This property is needed to avoid selecting sentences that have overlapping information with the sentences extracted already.
As the decoder structure is almost the same as in previous work, we reproduce the equations of BIBREF8 to avoid confusion, with minor modifications to agree with our notations. Formally, the extraction probability is calculated as:
where $e_t$ is the output of the glimpse operation:
In Equation DISPLAY_FORM9, $z_t$ is the hidden state of the LSTM decoder at time $t$ (shown in green in Figure FIGREF1). All the $W$ and $v$ are trainable parameters.
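A minimal PyTorch sketch of one decoding step of this pointer network is given below, following the glimpse-then-point scheme of BIBREF8; the exact parameterization of the attention is an assumption rather than the authors' released code.

import torch
import torch.nn as nn

class PointerDecoderStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Glimpse attention parameters followed by pointer attention parameters.
        self.Wg_h = nn.Linear(dim, dim, bias=False)
        self.Wg_z = nn.Linear(dim, dim, bias=False)
        self.v_g = nn.Linear(dim, 1, bias=False)
        self.Wp_h = nn.Linear(dim, dim, bias=False)
        self.Wp_e = nn.Linear(dim, dim, bias=False)
        self.v_p = nn.Linear(dim, 1, bias=False)

    def forward(self, H, z_t, selected_mask):
        # H: (n, dim) sentence representations; z_t: (dim,) LSTM decoder state;
        # selected_mask: (n,) bool tensor marking sentences already extracted.
        a = torch.softmax(self.v_g(torch.tanh(self.Wg_h(H) + self.Wg_z(z_t))).squeeze(-1), dim=0)
        e_t = a @ H                                               # glimpse vector e_t
        scores = self.v_p(torch.tanh(self.Wp_h(H) + self.Wp_e(e_t))).squeeze(-1)
        scores = scores.masked_fill(selected_mask, float("-inf"))  # exclude selected sentences
        return torch.softmax(scores, dim=0)                       # extraction probability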
<<</Sentence Selection>>>
<<</Extractor Network>>>
<<<Abstractor Network>>>
The abstractor network approximates $f$, which compresses and paraphrases an extracted document sentence to a concise summary sentence. We use the standard attention based sequence-to-sequence (seq2seq) model BIBREF23, BIBREF24 with the copying mechanism BIBREF25 for handling out-of-vocabulary (OOV) words. Our abstractor is practically identical to the one proposed in BIBREF8.
<<</Abstractor Network>>>
<<</Model>>>
<<<Training>>>
In our model, an extractor selects a series of sentences, and then an abstractor paraphrases them. As they work in different ways, we need different training strategies suitable for each of them. Training the abstractor is relatively straightforward: maximizing the log-likelihood of the next word given the previous ground-truth words. However, there are several issues for extractor training. First, the extractor should consider the abstractor's rewriting process when it selects sentences. This causes a weak supervision problem BIBREF26, since the extractor gets training signals indirectly, only after the paraphrasing process is finished. In addition, since this procedure involves sampling or maximum selection, the extraction step is non-differentiable. Lastly, although our goal is to maximize ROUGE scores, neural models cannot be trained on them directly by maximum likelihood estimation.
To address the issues above, we apply standard policy gradient methods, and we propose a novel training procedure for the extractor that guides it toward the optimal policy in terms of the summary-level ROUGE. As usual in RL for sequence prediction, we pre-train submodules and apply RL to fine-tune the extractor.
<<<Training Submodules>>>
<<<Extractor Pre-training>>>
Starting from a poor random policy makes it difficult to train the extractor agent to converge towards the optimal policy. Thus, we pre-train the network using cross-entropy (CE) loss, as in previous work BIBREF27, BIBREF8. However, there is no gold label for extractive summarization in most of the summarization datasets. Hence, we employ a greedy approach BIBREF16 to make the extractive oracles, where we add one sentence at a time incrementally to the summary, such that the ROUGE score of the current set of selected sentences is maximized with respect to the entire ground truth summary. This doesn't guarantee optimality, but it is enough to teach the network to select plausible sentences. Formally, the network is trained to minimize the cross-entropy loss as follows:
where $s^*_t$ is the $t$-th generated oracle sentence.
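The greedy oracle construction can be sketched as follows; `rouge` stands for any summary-level ROUGE scorer taking a candidate string and the gold summary, and is an assumed helper rather than part of the paper.

def greedy_oracle(doc_sents, gold_summary, rouge):
    # Incrementally add the sentence that most increases the summary-level ROUGE
    # of the current selection, stopping when no sentence improves it.
    selected, best_score = [], 0.0
    while len(selected) < len(doc_sents):
        candidates = []
        for i in range(len(doc_sents)):
            if i in selected:
                continue
            cand = " ".join(doc_sents[j] for j in selected + [i])
            candidates.append((rouge(cand, gold_summary), i))
        score, idx = max(candidates)
        if score <= best_score:
            break
        selected.append(idx)
        best_score = score
    return selected   # oracle sentence indices, in the order they were added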
<<</Extractor Pre-training>>>
<<<Abstractor Training>>>
For the abstractor training, we should create training pairs of input and target sentences. As the abstractor paraphrases at the sentence level, we perform a sentence-level search for each ground-truth summary sentence. We find the most similar document sentence $s^{\prime }_t$ by:
And then the abstractor is trained as a usual sequence-to-sequence model to minimize the cross-entropy loss:
where $w^a_j$ is the $j$-th word of the target sentence $a_t$, and $\Phi $ is the encoded representation for $s^{\prime }_t$.
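A sketch of this pair construction is shown below; the similarity function is assumed to be a ROUGE-based score between two sentences, which may differ in detail from the paper's exact choice.

def make_abstractor_pairs(doc_sents, summary_sents, similarity):
    # For each ground-truth summary sentence a_t, pick the most similar document
    # sentence s'_t and use (s'_t, a_t) as a seq2seq training pair.
    pairs = []
    for a_t in summary_sents:
        s_prime = max(doc_sents, key=lambda s: similarity(s, a_t))
        pairs.append((s_prime, a_t))
    return pairs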
<<</Abstractor Training>>>
<<</Training Submodules>>>
<<<Guiding to the Optimal Policy>>>
To optimize the ROUGE metric directly, we regard the extractor as an agent in the reinforcement learning paradigm BIBREF28. We view the extractor as a stochastic policy that generates actions (sentence selections) and receives the score of the final evaluation metric (summary-level ROUGE in our case) as the return.
While we are ultimately interested in the maximization of the score of a complete summary, simply awarding this score at the last step provides a very sparse training signal. For this reason we define intermediate rewards using reward shaping BIBREF12, which is inspired by BIBREF27's attempt for sequence prediction. Namely, we compute summary-level score values for all intermediate summaries:
The reward for each step $r_t$ is the difference between the consecutive pairs of scores:
This measures the amount of increase or decrease in the summary-level score from selecting $\hat{s}_t$. Using the shaped reward $r_t$ instead of awarding the whole score $R$ at the last step does not change the optimal policy BIBREF12. We define the discounted future reward for each step as $R_t=\sum _{i=t}^{k}\gamma ^{i-t}r_{i}$, where $\gamma $ is a discount factor.
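The reward shaping and discounted returns can be sketched as follows; `rouge_summ` is an assumed summary-level ROUGE scorer over a list of selected sentences, and the discount factor matches the value reported in the implementation details.

def shaped_rewards(selected_sents, gold_summary, rouge_summ, gamma=0.95):
    # Score every intermediate summary, take consecutive differences as per-step
    # rewards, then accumulate discounted future returns R_t.
    scores = [0.0]
    for t in range(1, len(selected_sents) + 1):
        scores.append(rouge_summ(selected_sents[:t], gold_summary))
    rewards = [scores[t] - scores[t - 1] for t in range(1, len(scores))]
    returns, acc = [], 0.0
    for r in reversed(rewards):
        acc = r + gamma * acc
        returns.append(acc)
    return rewards, list(reversed(returns))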
Additionally, we add a `stop' action to the action space, by concatenating trainable parameters $h_{\text{stop}}$ (of the same dimension as $h_i$) to $H$. The agent treats it as another candidate to extract. When it selects `stop', the extracting episode ends and the final return is given. This encourages the model to extract additional sentences only when they are expected to increase the final return.
Following BIBREF8, we use the Advantage Actor Critic BIBREF29 method for training. We add a critic network to estimate a value function $V_t(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$, which is then used to compute the advantage of each action (we will omit the current state $(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$ to simplify):
where $Q_t(s_i)$ is the expected future reward for selecting $s_i$ at the current step $t$. We maximize this advantage with the policy gradient with the Monte-Carlo sample ($A_t(s_i) \approx R_t - V_t$):
where $\theta _\pi $ is the trainable parameters of the actor network (original extractor). And the critic is trained to minimize the square loss:
where $\theta _\psi $ is the trainable parameters of the critic network.
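For reference, a hedged sketch of the resulting advantage actor-critic losses is given below; tensor shapes and the use of a Monte-Carlo return are assumptions consistent with the description above, not the authors' exact implementation.

import torch.nn.functional as F

def a2c_losses(log_probs, values, returns):
    # log_probs, values, returns: 1-D tensors with one entry per extraction step.
    advantages = returns - values.detach()            # A_t approximated by R_t - V_t
    actor_loss = -(advantages * log_probs).mean()     # policy-gradient objective
    critic_loss = F.mse_loss(values, returns)         # square loss for the critic
    return actor_loss, critic_loss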
<<</Guiding to the Optimal Policy>>>
<<</Training>>>
<<<Experimental Setup>>>
<<<Datasets>>>
We evaluate the proposed approach on the CNN/Daily Mail BIBREF13 and New York Times BIBREF30 datasets, which are both standard corpora for multi-sentence abstractive summarization. Additionally, we test the generalization of our model on the DUC-2002 test set.
CNN/Daily Mail dataset consists of more than 300K news articles and each of them is paired with several highlights. We used the standard splits of BIBREF13 for training, validation and testing (90,226/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for Daily Mail). We did not anonymize entities. We followed the preprocessing methods in BIBREF25 after splitting sentences by Stanford CoreNLP BIBREF31.
The New York Times dataset also consists of many news articles. We followed the dataset splits of BIBREF14: 100,834 training and 9,706 test examples. We also followed their filtering procedure, removing documents with summaries shorter than 50 words. The final test set (NYT50) contains 3,452 examples out of the original 9,706.
The DUC-2002 dataset contains 567 document-summary pairs for single-document summarization. As a single document can have multiple summaries, we made one pair per summary. We used this dataset as a test set for our model trained on CNN/Daily Mail dataset to test generalization.
<<</Datasets>>>
<<<Implementation Details>>>
Our extractor is built on $\text{BERT}_\text{BASE}$ with fine-tuning, a smaller version than $\text{BERT}_\text{LARGE}$, due to limitations of time and space. We set the LSTM hidden size to 256 for all of our models. To initialize word embeddings for our abstractor, we use word2vec BIBREF32 of 128 dimensions trained on the same corpus. We optimize our model with the Adam optimizer BIBREF33 with $\beta _1=0.9$ and $\beta _2=0.999$. For extractor pre-training, we use a learning rate schedule following BIBREF21 with $warmup=10000$:
We set the learning rate to $1e^{-3}$ for the abstractor and $4e^{-6}$ for RL training. We apply gradient clipping using the L2 norm with threshold $2.0$. For RL training, we use $\gamma =0.95$ for the discount factor. To ease learning $h_{\text{stop}}$, we set the reward for the stop action to $\lambda \cdot \text{ROUGE-L}^{\text{summ}}_{F_1}(S, A)$, where $\lambda $ is a stop coefficient set to $0.08$. Our critic network shares the encoder with the actor (extractor) and has the same architecture except for the output layer, which estimates a scalar for the state value. The critic is initialized with the parameters of the pre-trained extractor wherever the two architectures coincide.
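A common instantiation of this warmup schedule, in the style of BIBREF21, is sketched below; the scale constant is an illustrative assumption, not a value taken from the paper.

def lr_schedule(step, warmup=10000, scale=1e-3):
    # Linear warmup followed by inverse-square-root decay.
    step = max(step, 1)
    return scale * min(step ** -0.5, step * warmup ** -1.5)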
<<</Implementation Details>>>
<<<Evaluation>>>
We evaluate the performance of our method using different variants of ROUGE metric computed with respect to the gold summaries. On the CNN/Daily Mail and DUC-2002 dataset, we use standard ROUGE-1, ROUGE-2, and ROUGE-L BIBREF34 on full length $F_1$ with stemming as previous work did BIBREF16, BIBREF25, BIBREF8. On NYT50 dataset, following BIBREF14 and BIBREF35, we used the limited length ROUGE recall metric, truncating the generated summary to the length of the ground truth summary.
<<</Evaluation>>>
<<</Experimental Setup>>>
<<<Results>>>
<<<CNN/Daily Mail>>>
Table TABREF24 shows the experimental results on CNN/Daily Mail dataset, with extractive models in the top block and abstractive models in the bottom block. For comparison, we list the performance of many recent approaches with ours.
<<<Extractive Summarization>>>
As BIBREF25 showed, the first 3 sentences (lead-3) in an article form a strong summarization baseline on the CNN/Daily Mail dataset. Therefore, the very first objective of extractive models is to outperform this simple method, which always returns the top 3 or 4 sentences. However, as Table TABREF27 shows, the ROUGE scores of the lead baselines and the extractors from previous work in the Sentence Rewrite framework BIBREF8, BIBREF15 are almost tied. We can easily conjecture that the limited performance of their full models is due to their extractor networks. Our extractor network with BERT (BERT-ext), as a single model, outperforms those models by large margins. Adding reinforcement learning (BERT-ext + RL) gives higher performance, which is competitive with other extractive approaches using pre-trained Transformers (see Table TABREF24). This shows the effectiveness of our learning method.
<<</Extractive Summarization>>>
<<<Abstractive Summarization>>>
Our abstractive approaches combine the extractor with the abstractor. The combined model (BERT-ext + abs) without additional RL training outperforms the Sentence Rewrite model BIBREF8 without reranking, showing the effectiveness of our extractor network. With the proposed RL training procedure (BERT-ext + abs + RL), our model exceeds the best model of BIBREF8. In addition, the results are better than those of all the other abstractive methods that exploit extractive approaches BIBREF9, BIBREF8, BIBREF10.
<<</Abstractive Summarization>>>
<<<Redundancy Control>>>
Although the proposed RL training inherently gives training signals that induce the model to avoid redundancy across sentences, there can be still remaining overlaps between extracted sentences. We found that the additional methods reducing redundancies can improve the summarization quality, especially on CNN/Daily Mail dataset.
We tried Trigram Blocking BIBREF1 for the extractor and Reranking BIBREF8 for the abstractor, and we empirically found that only the reranking improves the performance. It helps the model to compress the extracted sentences focusing on disjoint information, even if there are some partial overlaps between the sentences. Our best abstractive model (BERT-ext + abs + RL + rerank) achieves the new state-of-the-art performance for abstractive summarization in terms of average ROUGE score, with large margins on ROUGE-L.
However, we empirically found that the reranking method has no effect or has negative effect on NYT50 or DUC-2002 dataset. Hence, we don't apply it for the remaining datasets.
<<</Redundancy Control>>>
<<<Combinatorial Reward>>>
Before seeing the effects of our summary-level rewards on final results, we check the upper bounds of different training signals for the full model. All the document sentences are paraphrased with our trained abstractor, and then we find the best set for each search method. Sentence-matching finds sentences with the highest ROUGE-L score for each sentence in the gold summary. This search method matches with the best reward from BIBREF8. Greedy Search is the same method explained for extractor pre-training in section SECREF11. Combination Search selects a set of sentences which has the highest summary-level ROUGE-L score, from all the possible combinations of sentences. Due to time constraints, we limited the maximum number of sentences to 5. This method corresponds to our final return in RL training.
Table TABREF31 shows the summary-level ROUGE scores of previously explained methods. We see considerable gaps between Sentence-matching and Greedy Search, while the scores of Greedy Search are close to those of Combination Search. Note that since we limited the number of sentences for Combination Search, the exact scores for it would be higher. The scores can be interpreted to be upper bounds for corresponding training methods. This result supports our training strategy; pre-training with Greedy Search and final optimization with the combinatorial return.
Additionally, we experiment to verify the contribution of our training method. We train the same model with different training signals; Sentence-level reward from BIBREF8 and combinatorial reward from ours. The results are shown in Table TABREF34. Both with and without reranking, the models trained with the combinatorial reward consistently outperform those trained with the sentence-level reward.
<<</Combinatorial Reward>>>
<<<Human Evaluation>>>
We also conduct human evaluation to ensure robustness of our training procedure. We measure relevance and readability of the summaries. Relevance is based on the summary containing important, salient information from the input article, being correct by avoiding contradictory/unrelated information, and avoiding repeated/redundant information. Readability is based on the summary's fluency, grammaticality, and coherence. To evaluate both these criteria, we design an Amazon Mechanical Turk experiment based on a ranking method, inspired by BIBREF36. We randomly select 20 samples from the CNN/Daily Mail test set and ask the human testers (3 for each sample) to rank summaries (for relevance and readability) produced by 3 different models: our final model, that of BIBREF8 and that of BIBREF1. 2, 1 and 0 points were given according to the ranking. The models were anonymized and randomly shuffled. Following previous work, the input article and ground truth summaries are also shown to the human participants in addition to the three model summaries. From the results shown in Table TABREF36, we can see that our model is better in relevance compared to others. In terms of readability, there was no noticeable difference.
<<</Human Evaluation>>>
<<</CNN/Daily Mail>>>
<<<New York Times corpus>>>
Table TABREF38 gives the results on the NYT50 dataset. We see that our BERT-ext + abs + RL outperforms all the extractive and abstractive models, except on ROUGE-1 from BIBREF1. Comparing with two recent models that adapted BERT in their summarization models BIBREF1, BIBREF4, we can say that we have proposed another method that successfully leverages BERT for summarization. In addition, the experiment proves the effectiveness of our RL training, with an improvement of about 2 points for each ROUGE metric.
<<</New York Times corpus>>>
<<<DUC-2002>>>
We also evaluated the models trained on the CNN/Daily Mail dataset on the out-of-domain DUC-2002 test set as shown in Table TABREF41. BERT-ext + abs + RL outperforms baseline models with large margins on all of the ROUGE scores. This result shows that our model generalizes better.
<<</DUC-2002>>>
<<</Results>>>
<<<Related Work>>>
There has been a variety of deep neural network models for abstractive document summarization. One of the most dominant structures is the sequence-to-sequence (seq2seq) models with attention mechanism BIBREF37, BIBREF38, BIBREF39. BIBREF25 introduced Pointer Generator network that implicitly combines the abstraction with the extraction, using copy mechanism BIBREF40, BIBREF41. More recently, there have been several studies that have attempted to improve the performance of the abstractive summarization by explicitly combining them with extractive models. Some notable examples include the use of inconsistency loss BIBREF9, key phrase extraction BIBREF42, BIBREF10, and sentence extraction with rewriting BIBREF8. Our model improves Sentence Rewriting with BERT as an extractor and summary-level rewards to optimize the extractor.
Reinforcement learning has been shown to be effective for directly optimizing a non-differentiable objective in language generation, including text summarization BIBREF43, BIBREF27, BIBREF35, BIBREF44, BIBREF11. BIBREF27 use actor-critic methods for language generation, using reward shaping BIBREF12 to address the sparsity of training signals. Inspired by this, we generalize it to sentence extraction to give a per-step reward while preserving optimality.
<<</Related Work>>>
<<<Conclusions>>>
We have improved Sentence Rewriting approaches for abstractive summarization, proposing a novel extractor architecture exploiting BERT and a novel training procedure which globally optimizes summary-level ROUGE metric. Our approach achieves the new state-of-the-art on both CNN/Daily Mail and New York Times datasets as well as much better generalization on DUC-2002 test set.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Abstract, Model"
],
"type": "disordered_section"
}
|
1909.00694
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Minimally Supervised Learning of Affective Events Using Discourse Relations
<<<Abstract>>>
Recognizing affective events that trigger positive or negative sentiment has a wide range of natural language processing applications but remains a challenging problem mainly because the polarity of an event is not necessarily predictable from its constituent words. In this paper, we propose to propagate affective polarity using discourse relations. Our method is simple and only requires a very small seed lexicon and a large raw corpus. Our experiments using Japanese data show that our method learns affective events effectively without manually labeled data. It also improves supervised learning results when labeled data are small.
<<</Abstract>>>
<<<Introduction>>>
Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).
Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data.
In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ and $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. Likewise, if $x_2$ is known to be negative, this indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession), although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.
We trained the models using a Japanese web corpus. Given the minimum amount of supervision, they performed well. In addition, the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small.
<<</Introduction>>>
<<<Related Work>>>
Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what are described (e.g., movies), we work on how people are typically affected by events. In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).
Label propagation from seed instances is a common approach to inducing sentiment polarities. While BIBREF5 and BIBREF10 worked on word- and phrase-level polarities, BIBREF0 dealt with event-level polarities. BIBREF5 and BIBREF10 linked instances using co-occurrence information and/or phrase-level coordinations (e.g., “$A$ and $B$” and “$A$ but $B$”). We shift our scope to event pairs that are more complex than phrase pairs, and consequently exploit discourse connectives as event-level counterparts of phrase-level conjunctions.
BIBREF0 constructed a network of events using word embedding-derived similarities. Compared with this method, our discourse relation-based linking of events is much simpler and more intuitive.
Some previous studies made use of document structure to understand the sentiment. BIBREF11 proposed a sentiment-specific pre-training strategy using unlabeled dialog data (tweet-reply pairs). BIBREF12 proposed a method of building a polarity-tagged corpus (ACP Corpus). They automatically gathered sentences that had positive or negative opinions utilizing HTML layout structures in addition to linguistic patterns. Our method depends only on raw texts and thus has wider applicability.
<<</Related Work>>>
<<<Proposed Method>>>
<<<Polarity Function>>>
Our goal is to learn the polarity function $p(x)$, which predicts the sentiment polarity score of an event $x$. We approximate $p(x)$ by a neural network of the following form: $p(x) = {\rm tanh}({\rm Linear}({\rm Encoder}(x)))$.
${\rm Encoder}$ outputs a vector representation of the event $x$. ${\rm Linear}$ is a fully-connected layer and transforms the representation into a scalar. ${\rm tanh}$ is the hyperbolic tangent and transforms the scalar into a score ranging from $-1$ to 1. In Section SECREF21, we consider two specific implementations of ${\rm Encoder}$.
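A minimal PyTorch sketch of this polarity function is given below, assuming the encoder returns a fixed-size vector per event; the class and argument names are illustrative, not the authors' code.

import torch
import torch.nn as nn

class PolarityFunction(nn.Module):
    def __init__(self, encoder, hidden_dim):
        super().__init__()
        self.encoder = encoder              # BiGRU or BERT, mapping an event to a vector
        self.linear = nn.Linear(hidden_dim, 1)

    def forward(self, event):
        h = self.encoder(event)             # (batch, hidden_dim)
        return torch.tanh(self.linear(h)).squeeze(-1)   # polarity score in [-1, 1]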
<<</Polarity Function>>>
<<<Discourse Relation-Based Event Pairs>>>
Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs, $(x_{i1}, x_{i2})$ ($i=1, \cdots $) from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as former and latter events, respectively. As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession.
The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types.
<<<AL (Automatically Labeled Pairs)>>>
The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. They are used as reference scores during training.
<<</AL (Automatically Labeled Pairs)>>>
<<<CA (Cause Pairs)>>>
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities.
<<</CA (Cause Pairs)>>>
<<<CO (Concession Pairs)>>>
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities.
<<</CO (Concession Pairs)>>>
<<</Discourse Relation-Based Event Pairs>>>
<<<Loss Functions>>>
Using AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. We define a loss function for each of the three types of event pairs and sum up the multiple loss functions.
We use mean squared error to construct loss functions. For the AL data, the loss function is defined as:
where $x_{i1}$ and $x_{i2}$ are the $i$-th pair of the AL data. $r_{i1}$ and $r_{i2}$ are the automatically-assigned scores of $x_{i1}$ and $x_{i2}$, respectively. $N_{\rm AL}$ is the total number of AL pairs, and $\lambda _{\rm AL}$ is a hyperparameter.
For the CA data, the loss function is defined as:
$y_{i1}$ and $y_{i2}$ are the $i$-th pair of the CA pairs. $N_{\rm CA}$ is the total number of CA pairs. $\lambda _{\rm CA}$ and $\mu $ are hyperparameters. The first term makes the scores of the two events closer while the second term prevents the scores from shrinking to zero.
The loss function for the CO data is defined analogously:
The difference is that the first term makes the scores of the two events distant from each other.
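The three losses can be sketched as below for batches of predicted scores (PyTorch tensors); the exact weighting, normalization, and the form of the term that pushes CO scores apart are assumptions based on the description above, not the paper's equations.

def loss_al(p1, p2, r1, r2, lam_al):
    # Squared error between predicted scores and the automatically assigned labels.
    return lam_al * ((p1 - r1) ** 2 + (p2 - r2) ** 2).mean()

def loss_ca(p1, p2, lam_ca, mu):
    # Pull the two scores together; the second term discourages scores near zero.
    return lam_ca * ((p1 - p2) ** 2).mean() - mu * (p1 ** 2 + p2 ** 2).mean()

def loss_co(p1, p2, lam_co, mu):
    # Push the two scores toward opposite polarities (hence apart), again with an
    # anti-shrinkage term; one plausible realization of the description above.
    return lam_co * ((p1 + p2) ** 2).mean() - mu * (p1 ** 2 + p2 ** 2).mean()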
<<</Loss Functions>>>
<<</Proposed Method>>>
<<<Experiments>>>
<<<Dataset>>>
<<<AL, CA, and CO>>>
As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.
. 重大な失敗を犯したので、仕事をクビになった。
Because [I] made a serious mistake, [I] got fired.
From this sentence, we extracted the event pair of “重大な失敗を犯す” ([I] make a serious mistake) and “仕事をクビになる” ([I] get fired), and tagged it with Cause.
We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each was five times larger than AL. The results are shown in Table TABREF16.
<<</AL, CA, and CO>>>
<<<ACP (ACP Corpus)>>>
We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:
. 作業が楽だ。
The work is easy.
. 駐車場がない。
There is no parking lot.
Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.
The objective function for supervised training is:
where $v_i$ is the $i$-th event, $R_i$ is the reference score of $v_i$, and $N_{\rm ACP}$ is the number of the events of the ACP Corpus.
To optimize the hyperparameters, we used the dev set of the ACP Corpus. For the evaluation, we used the test set of the ACP Corpus. The model output was classified as positive if $p(x) > 0$ and negative if $p(x) \le 0$.
<<</ACP (ACP Corpus)>>>
<<</Dataset>>>
<<<Model Configurations>>>
As for ${\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.
BERT BIBREF17 is a pre-trained multi-layer bidirectional Transformer BIBREF18 encoder. Its output is the final hidden state corresponding to the special classification tag ([CLS]). For the details of ${\rm Encoder}$, see Sections SECREF30.
We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\mathcal {L}_{\rm AL}$, $\mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$, $\mathcal {L}_{\rm ACP}$, and $\mathcal {L}_{\rm ACP} + \mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$.
<<</Model Configurations>>>
<<<Results and Discussion>>>
Table TABREF23 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon. We can see that the seed lexicon itself had practically no impact on prediction.
The models in the top block performed considerably better than the random baselines. The performance gaps with their (semi-)supervised counterparts, shown in the middle block, were less than 7%. This demonstrates the effectiveness of discourse relation-based label propagation.
Comparing the model variants, we obtained the highest score with the BiGRU encoder trained with the AL+CA+CO dataset. BERT was competitive but its performance went down if CA and CO were used in addition to AL. We conjecture that BERT was more sensitive to noises found more frequently in CA and CO.
Contrary to our expectations, supervised models (ACP) outperformed semi-supervised models (ACP+AL+CA+CO). This suggests that the training set of 0.6 million events is sufficiently large for training the models. For comparison, we trained the models with a subset (6,000 events) of the ACP dataset. As the results shown in Table TABREF24 demonstrate, our method is effective when labeled data are small.
The result of hyperparameter optimization for the BiGRU encoder was as follows:
As the CA and CO pairs were equal in size (Table TABREF16), $\lambda _{\rm CA}$ and $\lambda _{\rm CO}$ were comparable values. $\lambda _{\rm CA}$ was about one-third of $\lambda _{\rm CO}$, and this indicated that the CA pairs were noisier than the CO pairs. A major type of CA pairs that violates our assumption was in the form of “$\textit {problem}_{\text{negative}}$ causes $\textit {solution}_{\text{positive}}$”:
. (悪いところがある, よくなるように努力する)
(there is a bad point, [I] try to improve [it])
The polarities of the two events were reversed in spite of the Cause relation, and this lowered the value of $\lambda _{\rm CA}$.
Some examples of model outputs are shown in Table TABREF26. The first two examples suggest that our model successfully learned negation without explicit supervision. Similarly, the next two examples differ only in voice but the model correctly recognized that they had opposite polarities. The last two examples share the predicate “落とす” (drop) and only the objects are different. The second event “肩を落とす” (lit. drop one's shoulders) is an idiom that expresses a disappointed feeling. The examples demonstrate that our model correctly learned non-compositional expressions.
<<</Results and Discussion>>>
<<</Experiments>>>
<<<Conclusion>>>
In this paper, we proposed to use discourse relations to effectively propagate polarities of affective events from seeds. Experiments show that, even with a minimal amount of supervision, the proposed method performed well.
Although event pairs linked by discourse analysis are shown to be useful, they nevertheless contain noises. Adding linguistically-motivated filtering rules would help improve the performance.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nProposed Method\nPolarity Function\nDiscourse Relation-Based Event Pairs\nAL (Automatically Labeled Pairs)\nCA (Cause Pairs)\nCO (Concession Pairs)\nLoss Functions\nExperiments\nDataset\nAL, CA, and CO\nACP (ACP Corpus)\nModel Configurations\nResults and Discussion\nConclusion"
],
"type": "outline"
}
|
1910.14497
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Probabilistic Bias Mitigation in Word Embeddings
<<<Abstract>>>
It has been shown that word embeddings derived from large corpora tend to incorporate biases present in their training data. Various methods for mitigating these biases have been proposed, but recent work has demonstrated that these methods hide but fail to truly remove the biases, which can still be observed in word nearest-neighbor statistics. In this work we propose a probabilistic view of word embedding bias. We leverage this framework to present a novel method for mitigating bias which relies on probabilistic observations to yield a more robust bias mitigation algorithm. We demonstrate that this method effectively reduces bias according to three separate measures of bias while maintaining embedding quality across various popular benchmark semantic tasks.
<<</Abstract>>>
<<<Introduction>>>
Word embeddings, or vector representations of words, are an important component of Natural Language Processing (NLP) models and necessary for many downstream tasks. However, word embeddings, including embeddings commonly deployed for public use, have been shown to exhibit unwanted societal stereotypes and biases, raising concerns about disparate impact on axes of gender, race, ethnicity, and religion BIBREF0, BIBREF1. The impact of this bias has manifested in a range of downstream tasks, from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3, increasing the likelihood of amplifying harmful biases through the use of these models.
The most well-established method thus far for mitigating bias relies on projecting target words onto a bias subspace (such as a gender subspace) and subtracting out the difference between the resulting distances BIBREF0. On the other hand, the most popular metric for measuring bias is the WEAT statistic BIBREF1, which compares the cosine similarities between groups of words. However, WEAT has been recently shown to overestimate bias as a result of implicitly relying on similar frequencies for the target words BIBREF4, and BIBREF5 demonstrated that evidence of bias can still be recovered after geometric bias mitigation by examining the neighborhood of a target word among socially-biased words.
In response to this, we propose an alternative framework for bias mitigation in word embeddings that approaches this problem from a probabilistic perspective. The motivation for this approach is two-fold. First, most popular word embedding algorithms are probabilistic at their core, i.e., they are trained (explicitly or implicitly BIBREF6) to optimize an objective defined in terms of word co-occurrence probabilities. Thus, we argue that a framework for measuring and treating bias in these embeddings should take into account, in addition to their geometric aspect, their probabilistic nature too. On the other hand, the issue of bias has also been approached (albeit in different contexts) in the fairness literature, where various intuitive notions of equity such as equalized odds have been formalized through probabilistic criteria. By considering analogous criteria for the word embedding setting, we seek to draw connections between these two bodies of work.
We present experiments on various bias mitigation benchmarks and show that our framework is comparable to state-of-the-art alternatives according to measures of geometric bias mitigation and that it performs far better according to measures of neighborhood bias. For fair comparison, we focus on mitigating a binary gender bias in pre-trained word embeddings using SGNS (skip-gram with negative-sampling), though we note that this framework and methods could be extended to other types of bias and word embedding algorithms.
<<</Introduction>>>
<<<Background>>>
<<<Geometric Bias Mitigation>>>
Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen)...\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \sum _{j=1}^{k} (v \cdot b_j) b_j$, where the subspace $B$ is defined by $k$ orthogonal unit vectors $B = \lbrace b_1,...,b_k\rbrace $.
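A small NumPy sketch of this projection (and the debiasing subtraction it supports) is given below; the function names are illustrative assumptions.

import numpy as np

def project_onto_subspace(v, B):
    # B: iterable of k orthonormal basis vectors b_1, ..., b_k of the bias subspace.
    return sum((v @ b) * b for b in B)

def remove_bias_component(v, B):
    # Geometric bias mitigation subtracts the component of v inside the subspace.
    return v - project_onto_subspace(v, B)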
<<<WEAT>>>
The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:
where $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \in A} cos(w,a) - mean_{b \in B} cos(w,b)$, and $X$, $Y$, $A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the word groups, and a value of zero indicates that $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT.
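The statistic can be sketched with NumPy as follows; `emb` is an assumed dictionary mapping words to vectors.

import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def s(w, A, B, emb):
    # Association of word w with attribute sets A and B.
    return np.mean([cos(emb[w], emb[a]) for a in A]) - np.mean([cos(emb[w], emb[b]) for b in B])

def weat_effect_size(X, Y, A, B, emb):
    sx = [s(x, A, B, emb) for x in X]
    sy = [s(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)   # roughly in [-2, 2]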
<<</WEAT>>>
<<<RIPA>>>
The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by $[-||w||,||w||]$.
<<</RIPA>>>
<<<Neighborhood Metric>>>
The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias.
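A sketch of this metric is given below, assuming cosine similarity for the nearest-neighbor search and pre-computed lists of socially-biased male and female word vectors; these details are assumptions, not the authors' implementation.

import numpy as np

def neighborhood_bias(target_vec, male_vecs, female_vecs, k=100):
    # Fraction of male socially-biased words among the k nearest socially-biased
    # neighbors of the target word; 0.5 indicates an unbiased word.
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    sims = [(cos(target_vec, v), 1) for v in male_vecs]
    sims += [(cos(target_vec, v), 0) for v in female_vecs]
    top = sorted(sims, reverse=True)[:k]
    return sum(is_male for _, is_male in top) / k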
<<</Neighborhood Metric>>>
<<</Geometric Bias Mitigation>>>
<<</Background>>>
<<<A Probabilistic Framework for Bias Mitigation>>>
Our objective here is to extend and complement the geometric notions of word embedding bias described in the previous section with an alternative, probabilistic, approach. Intuitively, we seek a notion of equality akin to that of demographic parity in the fairness literature, which requires that a decision or outcome be independent of a protected attribute such as gender BIBREF7. Similarly, when considering a probabilistic definition of unbiased in word embeddings, we can consider the conditional probabilities of word pairs, ensuring for example that $p(doctor|man) \approx p(doctor|woman)$, and can extend this probabilistic framework to include the neighborhood of a target word, addressing the potential pitfalls of geometric bias mitigation.
Conveniently, most word embedding frameworks allow for immediate computation of the conditional probabilities $P(w|c)$. Here, we focus our attention on the Skip-Gram method with Negative Sampling (SGNS) of BIBREF8, although our framework can be equivalently instantiated for most other popular embedding methods, owing to their core similarities BIBREF6, BIBREF9. Leveraging this probabilistic nature, we construct a bias mitigation method in two steps, and examine each step as an independent method as well as the resulting composite method.
<<<Probabilistic Bias Mitigation>>>
This component of our bias mitigation framework seeks to enforce that the probability of prediction or outcome cannot depend on a protected class such as gender. We can formalize this intuitive goal through a loss function that penalizes the discrepancy between the conditional probabilities of a target word (i.e., one that should not be affected by the protected attribute) conditioned on two words describing the protected attribute (e.g., man and woman in the case of gender). That is, for every target word we seek to minimize:
where $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen), \dots \rbrace $ is a set of word pairs characterizing the protected attribute, akin to that used in previous work BIBREF0.
At this point, the specific form of the objective will depend on the type of word embeddings used. For our example of SGNS, recall that this algorithm models the conditional probability of a target word given a context word as a function of the inner product of their representations. Though exactly calculating the conditional probability requires summing over the conditional probabilities of all the words in the vocabulary, we can use the estimate of the log conditional probability proposed by BIBREF8, i.e., $ \log p(w_O|w_I) \approx \log \sigma ({v^{\prime }_{wo}}^T v_{wI}) + \sum _{i=1}^{k} [\log {\sigma ({{-v^{\prime }_{wi}}^T v_{wI}})}] $.
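A hedged PyTorch sketch of this objective for SGNS vectors is shown below; the negative-sampling scheme, whether negatives are shared across a pair, and the embedding containers are assumptions rather than details from the paper.

import torch

def sgns_log_prob(v_out_t, v_in_c, neg_out):
    # Negative-sampling estimate of log p(t | c) with k negative output vectors.
    pos = torch.log(torch.sigmoid(v_out_t @ v_in_c))
    neg = torch.log(torch.sigmoid(-(neg_out @ v_in_c))).sum()
    return pos + neg

def prob_bias_loss(targets, pairs, emb_in, emb_out, sample_negatives):
    # Penalize the gap between log p(t | w_m) and log p(t | w_f) for every target
    # word t and every protected-attribute pair (w_m, w_f), e.g. ("man", "woman").
    loss = 0.0
    for t in targets:
        for w_m, w_f in pairs:
            neg = sample_negatives()   # (k, dim) tensor of sampled output vectors
            lp_m = sgns_log_prob(emb_out[t], emb_in[w_m], neg)
            lp_f = sgns_log_prob(emb_out[t], emb_in[w_f], neg)
            loss = loss + (lp_m - lp_f) ** 2
    return loss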
<<</Probabilistic Bias Mitigation>>>
<<<Nearest Neighbor Bias Mitigation>>>
Based on observations by BIBREF5, we extend our method to consider the composition of a target word's neighborhood of socially-gendered words. We note that bias in a word embedding depends not only on the relationship between a target word and explicitly gendered words like man and woman, but also on the relationship between a target word and socially-biased male or female words. Bolukbasi et al. BIBREF0 proposed a method for eliminating this kind of indirect bias through geometric bias mitigation, but it is shown to be ineffective by the neighborhood metric BIBREF5.
Instead, we extend our method of bias mitigation to account for this neighborhood effect. Specifically, we examine the conditional probabilities of a target word given the $k/2$ nearest neighbors from the male socially-biased words as well as given the $k/2$ female socially-biased words (in sorted order, from smallest to largest). The groups of socially-biased words are constructed as described in the neighborhood metric. If the word is unbiased according to the neighborhood metric, these probabilities should be comparable. We then use the following as our loss function:
where $m$ and $f$ represent the male and female neighbors sorted by distance to the target word $t$ (we use $L1$ distance).
<<</Nearest Neighbor Bias Mitigation>>>
<<</A Probabilistic Framework for Bias Mitigation>>>
<<<Experiments>>>
We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.
We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.
We compare this method of bias mitigation with no bias mitigation ("Orig"), geometric bias mitigation ("Geo"), the two pieces of our method alone ("Prob" and "KNN") and the composite method ("KNN+Prob"). We note that the composite method performs reasonably well according to the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably well on both metrics.
<<</Experiments>>>
<<<Discussion>>>
We proposed a simple method of bias mitigation based on these probabilistic notions of fairness, and showed that it leads to promising results in various benchmark bias mitigation tasks. Future work should include considering more rigorous and non-binary definitions of bias and experimenting with various embedding algorithms and network architectures.
<<<Acknowledgements>>>
The authors would like to thank Tommi Jaakkola for stimulating discussions during the initial stages of this work.
<<</Acknowledgements>>>
<<</Discussion>>>
<<<Experiment Notes>>>
For Equation 4, as described in the original work, each of the $k$ sample words $w_i$ is drawn from the corpus using the unigram distribution raised to the 3/4 power.
For reference, the most male socially-biased words include words such as: 'john', 'jr', 'mlb', 'dick', 'nfl', 'cfl', 'sgt', 'abbot', 'halfback', 'jock', 'mike', 'joseph', while the most female socially-biased words include words such as: 'feminine', 'marital', 'tatiana', 'pregnancy', 'eva', 'pageant', 'distress', 'cristina', 'ida', 'beauty', 'sexuality', 'fertility'.
<<</Experiment Notes>>>
<<<Professions>>>
'accountant', 'acquaintance', 'actor', 'actress', 'administrator', 'adventurer', 'advocate', 'aide', 'alderman', 'ambassador', 'analyst', 'anthropologist', 'archaeologist', 'archbishop', 'architect', 'artist', 'assassin', 'astronaut', 'astronomer', 'athlete', 'attorney', 'author', 'baker', 'banker', 'barber', 'baron', 'barrister', 'bartender', 'biologist', 'bishop', 'bodyguard', 'boss', 'boxer', 'broadcaster', 'broker', 'businessman', 'butcher', 'butler', 'captain', 'caretaker', 'carpenter', 'cartoonist', 'cellist', 'chancellor', 'chaplain', 'character', 'chef', 'chemist', 'choreographer', 'cinematographer', 'citizen', 'cleric', 'clerk', 'coach', 'collector', 'colonel', 'columnist', 'comedian', 'comic', 'commander', 'commentator', 'commissioner', 'composer', 'conductor', 'confesses', 'congressman', 'constable', 'consultant', 'cop', 'correspondent', 'counselor', 'critic', 'crusader', 'curator', 'dad', 'dancer', 'dean', 'dentist', 'deputy', 'detective', 'diplomat', 'director', 'doctor', 'drummer', 'economist', 'editor', 'educator', 'employee', 'entertainer', 'entrepreneur', 'envoy', 'evangelist', 'farmer', 'filmmaker', 'financier', 'fisherman', 'footballer', 'foreman', 'gangster', 'gardener', 'geologist', 'goalkeeper', 'guitarist', 'headmaster', 'historian', 'hooker', 'illustrator', 'industrialist', 'inspector', 'instructor', 'inventor', 'investigator', 'journalist', 'judge', 'jurist', 'landlord', 'lawyer', 'lecturer', 'legislator', 'librarian', 'lieutenant', 'lyricist', 'maestro', 'magician', 'magistrate', 'maid', 'manager', 'marshal', 'mathematician', 'mechanic', 'midfielder', 'minister', 'missionary', 'monk', 'musician', 'nanny', 'narrator', 'naturalist', 'novelist', 'nun', 'nurse', 'observer', 'officer', 'organist', 'painter', 'pastor', 'performer', 'philanthropist', 'philosopher', 'photographer', 'physician', 'physicist', 'pianist', 'planner', 'playwright', 'poet', 'policeman', 'politician', 'preacher', 'president', 'priest', 'principal', 'prisoner', 'professor', 'programmer', 'promoter', 'proprietor', 'prosecutor', 'protagonist', 'provost', 'psychiatrist', 'psychologist', 'rabbi', 'ranger', 'researcher', 'sailor', 'saint', 'salesman', 'saxophonist', 'scholar', 'scientist', 'screenwriter', 'sculptor', 'secretary', 'senator', 'sergeant', 'servant', 'singer', 'skipper', 'sociologist', 'soldier', 'solicitor', 'soloist', 'sportsman', 'statesman', 'steward', 'student', 'substitute', 'superintendent', 'surgeon', 'surveyor', 'swimmer', 'teacher', 'technician', 'teenager', 'therapist', 'trader', 'treasurer', 'trooper', 'trumpeter', 'tutor', 'tycoon', 'violinist', 'vocalist', 'waiter', 'waitress', 'warden', 'warrior', 'worker', 'wrestler', 'writer'
<<</Professions>>>
<<<WEAT Word Sets>>>
Words used for WEAT statistic, consisting of baseline bias tests and gender bias tests in the format X vs Y / A vs B
Flowers vs Insects / Pleasant vs Unpleasant
X: "aster", "clover", "hyacinth", "marigold", "poppy", "azalea", "crocus", "iris", "orchid", "rose", "bluebell", "daffodil", "lilac", "pansy", "tulip", "buttercup", "daisy", "lily", "peony", "violet", "carnation", "gladiola", "magnolia", "petunia", "zinnia"
Y: "ant", "caterpillar", "flea", "locust", "spider", "bedbug", "centipede", "fly", "maggot", "tarantula", "bee", "cockroach", "gnat", "mosquito", "termite", "beetle", "cricket", "hornet", "moth", "wasp", "blackfly", "dragonfly", "horsefly", "roach", "weevil"
A: "caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family", "happy", "laughter", "paradise", "vacation"
B: "abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten", "vomit", "agony", "prison"
Instruments vs Weapons / Pleasant vs Unpleasant:
X: "bagpipe", "cello", "guitar", "lute", "trombone", "banjo", "clarinet", "harmonica", "mandolin", "trumpet", "bassoon", "drum", "harp", "oboe", "tuba", "bell", "fiddle", "harpsichord", "piano", "viola", "bongo", "flute", "horn", "saxophone", "violin"
Y: "arrow", "club", "gun", "missile", "spear", "ax", "dagger", "harpoon", "pistol", "sword", "blade", "dynamite", "hatchet", "rifle", "tank", "bomb", "firearm", "knife", "shotgun", "teargas", "cannon", "grenade", "mace", "slingshot", "whip"
A: "caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family", "happy", "laughter", "paradise", "vacation"
B: "abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten", "vomit", "agony", "prison"
Male vs Female / Career vs Family:
X: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend", "king", "actor"
Y: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend", "queen", "actress"
A: "executive", "management", "professional", "corporation", "salary", "office", "business", "career", "industry", "company", "promotion", "profession", "CEO", "manager", "coworker", "entrepreneur"
B: "home", "parents", "children", "family", "cousins", "marriage", "wedding", "relatives", "grandparents", "grandchildren", "nurture", "child", "toddler", "infant", "teenager"
Math vs Art / Male vs Female:
X: "math", "algebra", "geometry", "calculus", "equations", "computation", "numbers", "addition", "trigonometry", "arithmetic", "logic", "proofs", "multiplication", "mathematics"
Y: "poetry", "art", "Shakespeare", "dance", "literature", "novel", "symphony", "drama", "orchestra", "music", "ballet", "arts", "creative", "sculpture"
A: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend", "king", "actor"
B: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend", "queen", "actress"
Science vs Art / Male8 vs Female8:
X:"science", "technology", "physics", "chemistry", "Einstein", "NASA", "experiment", "astronomy", "biology", "aeronautics", "mechanics", "thermodynamics"
Y: "poetry", "art", "Shakespeare", "dance", "literature", "novel", "symphony", "drama", "orchestra", "music", "ballet", "arts", "creative", "sculpture"
A: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend"
B: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend"
<<</WEAT Word Sets>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nGeometric Bias Mitigation\nWEAT\nRIPA\nNeighborhood Metric\nA Probabilistic Framework for Bias Mitigation\nProbabilistic Bias Mitigation\nNearest Neighbor Bias Mitigation\nExperiments\nDiscussion\nAcknowledgements\nExperiment Notes\nProfessions\nWEAT Word Sets"
],
"type": "outline"
}
|
1912.02481
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yor\`ub\'a and Twi
<<<Abstract>>>
The success of several architectures to learn semantic representations from unannotated text and the availability of these kind of texts in online multilingual resources such as Wikipedia has facilitated the massive and automatic creation of resources for multiple languages. The evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. For low-resourced languages, the evaluation is more difficult and normally ignored, with the hope that the impressive capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced setting too. In this paper we focus on two African languages, Yor\`ub\'a and Twi, and compare the word embeddings obtained in this way, with word embeddings obtained from curated corpora and a language-dependent processing. We analyse the noise in the publicly available corpora, collect high quality and noisy data for the two languages and quantify the improvements that depend not only on the amount of data but on the quality too. We also use different architectures that learn word representations both from surface forms and characters to further exploit all the available information which showed to be important for these languages. For the evaluation, we manually translate the wordsim-353 word pairs dataset from English into Yor\`ub\'a and Twi. As output of the work, we provide corpora, embeddings and the test suits for both languages.
<<</Abstract>>>
<<<Introduction>>>
In recent years, word embeddings BIBREF0, BIBREF1, BIBREF2 have been proven to be very useful for training downstream natural language processing (NLP) tasks. Moreover, contextualized embeddings BIBREF3, BIBREF4 have been shown to further improve the performance of NLP tasks such as named entity recognition, question answering, or text classification when used as word features because they are able to resolve ambiguities of word representations when they appear in different contexts. Different deep learning architectures such as multilingual BERT BIBREF4, LASER BIBREF5 and XLM BIBREF6 have proved successful in the multilingual setting. All these architectures learn the semantic representations from unannotated text, making them cheap given the availability of texts in online multilingual resources such as Wikipedia. However, the evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. This is the best-case scenario: languages with tons of data for training that generate high-quality models.
For low-resourced languages, the evaluation is more difficult and therefore normally ignored simply because of the lack of resources. In these cases, training data is scarce, and the assumption that the capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced one does not need to be true. In this work, we focus on two African languages, Yorùbá and Twi, and carry out several experiments to verify this claim. Just by a simple inspection of the word embeddings trained on Wikipedia by fastText, we see a high number of non-Yorùbá or non-Twi words in the vocabularies. For Twi, the vocabulary has only 935 words, and for Yorùbá we estimate that 135 k out of the 150 k words belong to other languages such as English, French and Arabic.
In order to improve the semantic representations for these languages, we collect online texts and study the influence of the quality and quantity of the data in the final models. We also examine the most appropriate architecture depending on the characteristics of each language. Finally, we translate test sets and annotate corpora to evaluate the performance of both our models together with fastText and BERT pre-trained embeddings which could not be evaluated otherwise for Yorùbá and Twi. The evaluation is carried out in a word similarity and relatedness task using the wordsim-353 test set, and in a named entity recognition (NER) task where embeddings play a crucial role. Of course, the evaluation of the models in only two tasks is not exhaustive but it is an indication of the quality we can obtain for these two low-resourced languages as compared to others such as English where these evaluations are already available.
The rest of the paper is organized as follows. Related works are reviewed in Section SECREF2. The two languages under study are described in Section SECREF3. We introduce the corpora and test sets in Section SECREF4. The fifth section explores the different training architectures we consider, and the experiments that are carried out. Finally, discussion and concluding remarks are given in Section SECREF6.
<<</Introduction>>>
<<<Related Work>>>
The large amount of freely available text in the internet for multiple languages is facilitating the massive and automatic creation of multilingual resources. The resource par excellence is Wikipedia, an online encyclopedia currently available in 307 languages. Other initiatives such as Common Crawl or the Jehovah’s Witnesses site are also repositories for multilingual data, usually assumed to be noisier than Wikipedia. Word and contextual embeddings have been pre-trained on these data, so that the resources are nowadays at hand for more than 100 languages. Some examples include fastText word embeddings BIBREF2, BIBREF7, MUSE embeddings BIBREF8, BERT multilingual embeddings BIBREF4 and LASER sentence embeddings BIBREF5. In all cases, embeddings are trained either simultaneously for multiple languages, joining high- and low-resource data, or following the same methodology.
On the other hand, different approaches try to specifically design architectures to learn embeddings in a low-resourced setting. ChaudharyEtAl:2018 follow a transfer learning approach that uses phonemes, lemmas and morphological tags to transfer the knowledge from related high-resource language into the low-resource one. jiangEtal:2018 apply Positive-Unlabeled Learning for word embedding calculations, assuming that unobserved pairs of words in a corpus also convey information, and this is specially important for small corpora.
In order to assess the quality of word embeddings, word similarity and relatedness tasks are usually used. wordsim-353 BIBREF9 is a collection of 353 pairs annotated with semantic similarity scores on a scale from 0 to 10. Despite the problems detected in this dataset BIBREF10, it is widely used by the community. The test set was originally created for English, but the need for comparison with other languages has motivated several translations/adaptations. In hassanMihalcea:2009 the test was translated manually into Spanish, Romanian and Arabic and the scores were adapted to reflect similarities in the new language. The reported correlation between the English scores and the Spanish ones is 0.86. Later, JoubarneInkpen:2011 show indications that the measures of similarity highly correlate across languages. leviantReichart:2015 also translated wordsim-353 into German, Italian and Russian and used crowdsourcing to score the pairs. Finally, jiangEtal:2018 translated the test set with Google Cloud from English into Czech, Danish and Dutch. In our work, native speakers translate wordsim-353 into Yorùbá and Twi, and similarity scores are kept unless the discrepancy with English is big (see Section SECREF11 for details). A similar approach to our work is done for Gujarati in JoshiEtAl:2019.
<<</Related Work>>>
<<<Languages under Study>>>
<<<Yorùbá>>>
is a language of West Africa with over 50 million speakers. It is spoken, among other languages, in Nigeria, the Republic of Togo, the Benin Republic, Ghana and Sierra Leone. It is also a language of Òrìsà in Cuba, Brazil, and some Caribbean countries. It is one of the three major languages in Nigeria and is regarded as the third most spoken native African language. There are different dialects of Yorùbá in Nigeria BIBREF11, BIBREF12, BIBREF13. However, in this paper our focus is the standard Yorùbá based upon a report from the 1974 Joint Consultative Committee on Education BIBREF14.
Standard Yorùbá has 25 letters without the Latin characters c, q, v, x and z. There are 18 consonants (b, d, f, g, gb, j[dz], k, l, m, n, p[kp], r, s, ṣ, t, w y[j]), 7 oral vowels (a, e, ẹ, i, o, ọ, u), five nasal vowels, (an, $ \underaccent{\dot{}}{e}$n, in, $ \underaccent{\dot{}}{o}$n, un) and syllabic nasals (m̀, ḿ, ǹ, ń). Yorùbá is a tone language which makes heavy use of lexical tones which are indicated by the use of diacritics. There are three tones in Yorùbá namely low, mid and high which are represented as grave ($\setminus $), macron ($-$) and acute ($/$) symbols respectively. These tones are applied on vowels and syllabic nasals. Mid tone is usually left unmarked on vowels and every initial or first vowel in a word cannot have a high tone. It is important to note that tone information is needed for correct pronunciation and to have the meaning of a word BIBREF15, BIBREF12, BIBREF14. For example, owó (money), ọw (broom), òwò (business), w (honour), ọw (hand), and w (group) are different words with different dots and diacritic combinations. According to Asahiah2014, Standard Yorùbá uses 4 diacritics, 3 are for marking tones while the fourth which is the dot below is used to indicate the open phonetic variants of letter "e" and "o" and the long variant of "s". Also, there are 19 single diacritic letters, 3 are marked with dots below (ẹ, ọ, ṣ) while the rest are either having the grave or acute accent. The four double diacritics are divided between the grave and the acute accent as well.
As noted in Asahiah2014, most of the Yorùbá texts found in websites or public domain repositories (i) either use the correct Yorùbá orthography or (ii) replace diacritized characters with un-diacritized ones.
This happens as a result of many factors, but most especially to the unavailability of appropriate input devices for the accurate application of the diacritical marks BIBREF11. This has led to research on restoration models for diacritics BIBREF16, but the problem is not well solved and we find that most Yorùbá text in the public domain today is not well diacritized. Wikipedia is not an exception.
<<</Yorùbá>>>
<<<Twi>>>
is an Akan language of the Central Tano Branch of the Niger Congo family of languages. It is the most widely spoken of the about 80 indigenous languages in Ghana BIBREF17. It has about 9 million native speakers and about a total of 17–18 million Ghanaians have it as either first or second language. There are two mutually intelligible dialects, Asante and Akuapem, and sub-dialectical variants which are mostly unknown to and unnoticed by non-native speakers. It is also mutually intelligible with Fante and to a large extent Bono, another of the Akan languages. It is one of, if not the, easiest to learn to speak of the indigenous Ghanaian languages. The same is however not true when it comes to reading and especially writing. This is due to a number of easily overlooked complexities in the structure of the language. First of all, similarly to Yorùbá, Twi is a tonal language but written without diacritics or accents. As a result, words which are pronounced differently and unambiguous in speech tend to be ambiguous in writing. Besides, most of such words fit interchangeably in the same context and some of them can have more than two meanings. A simple example is:
Me papa aba nti na me ne wo redi no yie no. S wo ara wo nim s me papa ba a, me suban fofor adi.
This sentence could be translated as
(i) I'm only treating you nicely because I'm in a good mood. You already know I'm a completely different person when I'm in a good mood.
(ii) I'm only treating you nicely because my dad is around. You already know I'm a completely different person when my dad comes around.
Another characteristic of Twi is the fact that a good number of stop words have the same written form as content words. For instance, “na” or “na” could be the words “and, then”, the phrase “and then” or the word “mother”. This kind of ambiguity has consequences in several natural language applications where stop words are removed from text.
Finally, we want to point out that words can also be written with or without prefixes. An example is this same na and na which happen to be the same word with an omissible prefix across its multiple senses. For some words, the prefix characters are mostly used when the word begins a sentence and omitted in the middle. This however depends on the author/speaker. For the word embeddings calculation, this implies that one would have different embeddings for the same word found in different contexts.
<<</Twi>>>
<<</Languages under Study>>>
<<<Data>>>
We collect clean and noisy corpora for Yorùbá and Twi in order to quantify the effect of noise on the quality of the embeddings, where noisy has a different meaning depending on the language as it will be explained in the next subsections.
<<<Training Corpora>>>
For Yorùbá, we use several corpora collected by the Niger-Volta Language Technologies Institute with texts from different sources, including the Lagos-NWU conversational speech corpus, fully-diacritized Yorùbá language websites and an online Bible. The largest source with clean data is the JW300 corpus. We also created our own small-sized corpus by web-crawling three Yorùbá language websites (Alàkwé, r Yorùbá and Èdè Yorùbá Rẹw in Table TABREF7), some Yoruba Tweets with full diacritics and also news corpora (BBC Yorùbá and VON Yorùbá) with poor diacritics which we use to introduce noise. By noisy corpus, we refer to texts with incorrect diacritics (e.g in BBC Yorùbá), removal of tonal symbols (e.g in VON Yorùbá) and removal of all diacritics/under-dots (e.g some articles in Yorùbá Wikipedia). Furthermore, we got two manually typed fully-diacritized Yorùbá literature (Ìrìnkèrindò nínú igbó elégbèje and Igbó Olódùmarè) both written by Daniel Orowole Olorunfemi Fagunwa a popular Yorùbá author. The number of tokens available from each source, the link to the original source and the quality of the data is summarised in Table TABREF7.
The gathering of clean data in Twi is more difficult. We use the Bible as the base text, as it has been shown to be the most available resource for low and endangered languages BIBREF18. This is the cleanest of all the text we could obtain. In addition, we use the available (and small) Wikipedia dumps which are quite noisy, i.e. Wikipedia contains a good number of English words, spelling errors and Twi sentences formulated in a non-natural way (formulated as L2 speakers would speak Twi as compared to native speakers). Lastly, we added text crawled from jw and the JW300 Twi corpus. Notice that the Bible text is mainly written in the Asante dialect whilst the last, Jehovah's Witnesses, was written mainly in the Akuapem dialect. The Wikipedia text is a mixture of the two dialects. This introduces a lot of noise into the embeddings as the spelling of most words differs especially at the end of the words due to the mixture of dialects. The JW300 Twi corpus also contains mixed dialects but is mainly Akuapem. In this case, the noise comes also from spelling errors and the uncommon addition of diacritics which are not standardised on certain vowels. Figures for Twi corpora are summarised in the bottom block of Table TABREF7.
<<</Training Corpora>>>
<<<Evaluation Test Sets>>>
<<<Yorùbá.>>>
One of the contribution of this work is the introduction of the wordsim-353 word pairs dataset for Yorùbá. All the 353 word pairs were translated from English to Yorùbá by 3 native speakers. The set is composed of 446 unique English words, 348 of which can be expressed as one-word translation in Yorùbá (e.g. book translates to ìwé). In 61 cases (most countries and locations but also other content words) translations are transliterations (e.g. Doctor is dókítà and cucumber kùkúmbà.). 98 words were translated by short phrases instead of single words. This mostly affects words from science and technology (e.g. keyboard translates to pátákó ìtwé —literally meaning typing board—, laboratory translates to ìyàrá ìṣèwádìí —research room—, and ecology translates to ìm nípa àyíká while psychology translates to ìm nípa dá). Finally, 6 terms have the same form in English and Yorùbá therefore they are retained like that in the dataset (e.g. Jazz, Rock and acronyms such as FBI or OPEC).
We also annotate the Global Voices Yorùbá corpus to test the performance of our trained Yorùbá BERT embeddings on the named entity recognition task. The corpus consists of 25 k tokens which we annotate with four named entity types: DATE, location (LOC), organization (ORG) and personal names (PER). Any other token that does not belong to the four named entities is tagged with "O". The dataset is further split into training (70%), development (10%) and test (20%) partitions. Table TABREF12 shows the number of named entities per type and partition.
<<</Yorùbá.>>>
<<</Evaluation Test Sets>>>
<<</Data>>>
<<<Semantic Representations>>>
In this section, we describe the architectures used for learning word embeddings for the Twi and Yorùbá languages. Also, we discuss the quality of the embeddings as measured by the correlation with human judgements on the translated wordSim-353 test sets and by the F1 score in a NER task.
<<<Word Embeddings Architectures>>>
Modeling sub-word units has recently become a popular way to address out-of-vocabulary word problem in NLP especially in word representation learning BIBREF19, BIBREF2, BIBREF4. A sub-word unit can be a character, character $n$-grams, or heuristically learned Byte Pair Encodings (BPE) which work very well in practice especially for morphologically rich languages. Here, we consider two word embedding models that make use of character-level information together with word information: Character Word Embedding (CWE) BIBREF20 and fastText BIBREF2. Both of them are extensions of the Word2Vec architectures BIBREF0 that model sub-word units, character embeddings in the case of CWE and character $n$-grams for fastText.
CWE was introduced in 2015 to model the embeddings of characters jointly with words in order to address the issues of character ambiguities and non-compositional words especially in the Chinese language. A word or character embedding is learned in CWE using either CBOW or skipgram architectures, and then the final word embedding is computed by adding the character embeddings to the word itself:
where $w_j$ is the word embedding of $x_j$, $N_j$ is the number of characters in $x_j$, and $c_k$ is the embedding of the $k$-th character $c_k$ in $x_j$.
Similarly, in 2017 fastText was introduced as an extension to skipgram in order to take into account morphology and improve the representation of rare words. In this case the embedding of a word also includes the embeddings of its character $n$-grams:
where $w_j$ is the word embedding of $x_j$, $G_j$ is the number of character $n$-grams in $x_j$ and $g_k$ is the embedding of the $k$-th $n$-gram.
The CWE authors also proposed three alternatives to learn multiple embeddings per character and resolve ambiguities: (i) position-based character embeddings, where each character has different embeddings depending on the position it appears in within a word, i.e., beginning, middle or end; (ii) cluster-based character embeddings, where a character can have $K$ different cluster embeddings; and (iii) position-based cluster embeddings (CWE-LP), where for each position $K$ different embeddings are learned. We use the latter in our experiments with CWE, but no positional embeddings are used with fastText.
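To make the two composition schemes above concrete, the following sketch builds a CWE-style vector (word embedding plus averaged character embeddings) and a fastText-style vector (word embedding plus character n-gram embeddings); the lookup tables and the decision to average rather than sum the sub-word vectors are illustrative assumptions, not the exact formulations used by either model.
import numpy as np

def cwe_vector(word, word_vec, char_vec):
    # CWE-style composition: word embedding plus the average of its character embeddings.
    # `word_vec` and `char_vec` are hypothetical dictionaries of numpy vectors.
    chars = [char_vec[c] for c in word if c in char_vec]
    if not chars:
        return word_vec[word]
    return word_vec[word] + np.mean(chars, axis=0)

def char_ngrams(word, n_min=3, n_max=6):
    # Character n-grams of the word padded with boundary symbols, as in fastText.
    padded = "<" + word + ">"
    return [padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def subword_vector(word, word_vec, ngram_vec):
    # fastText-style composition: word embedding plus (here: averaged) n-gram embeddings.
    grams = [ngram_vec[g] for g in char_ngrams(word) if g in ngram_vec]
    if not grams:
        return word_vec[word]
    return word_vec[word] + np.mean(grams, axis=0)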
Finally, we consider a contextualized embedding architecture, BERT BIBREF4. BERT is a masked language model based on the highly efficient and parallelizable Transformer architecture BIBREF21 known to produce very rich contextualized representations for downstream NLP tasks.
The architecture is trained by jointly conditioning on both left and right contexts in all the transformer layers using two unsupervised objectives: Masked LM and Next-sentence prediction. The representation of a word is therefore learned according to the context it is found in.
Training contextual embeddings requires huge amounts of text, which is not available for low-resourced languages such as Yorùbá and Twi. However, Google provided pre-trained multilingual embeddings for 102 languages including Yorùbá (but not Twi).
<<</Word Embeddings Architectures>>>
<<<Experiments>>>
<<<FastText Training and Evaluation>>>
As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.
Facebook released pre-trained word embeddings using fastText for 294 languages trained on Wikipedia BIBREF2 (F1 in tables) and for 157 languages trained on Wikipedia and Common Crawl BIBREF7 (F2). For Yorùbá, both versions are available but only embeddings trained on Wikipedia are available for Twi. We consider these embeddings the result of training on what we call massively-extracted corpora. Notice that training settings for both embeddings are not exactly the same, and differences in performance might come both from corpus size/quality but also from the background model. The 294-languages version is trained using skipgram, in dimension 300, with character $n$-grams of length 5, a window of size 5 and 5 negatives. The 157-languages version is trained using CBOW with position-weights, in dimension 300, with character $n$-grams of length 5, a window of size 5 and 10 negatives.
We want to compare the performance of these embeddings with the equivalent models that can be obtained by training on the different sources verified by native speakers of Twi and Yorùbá; what we call curated corpora and has been described in Section SECREF4 For the comparison, we define 3 datasets according to the quality and quantity of textual data used for training: (i) Curated Small Dataset (clean), C1, about 1.6 million tokens for Yorùbá and over 735 k tokens for Twi. The clean text for Twi is the Bible and for Yoruba all texts marked under the C1 column in Table TABREF7. (ii) In Curated Small Dataset (clean + noisy), C2, we add noise to the clean corpus (Wikipedia articles for Twi, and BBC Yorùbá news articles for Yorùbá). This increases the number of training tokens for Twi to 742 k tokens and Yorùbá to about 2 million tokens. (iii) Curated Large Dataset, C3 consists of all available texts we are able to crawl and source out for, either clean or noisy. The addition of JW300 BIBREF22 texts increases the vocabulary to more than 10 k tokens in both languages.
We train our fastText systems using a skipgram model with an embedding size of 300 dimensions, context window size of 5, 10 negatives and $n$-grams ranging from 3 to 6 characters similarly to the pre-trained models for both languages. Best results are obtained with minimum word count of 3.
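As an illustration, this training configuration roughly corresponds to the following call to the fastText Python bindings, followed by the wordSim-353 evaluation described next; the corpus path and the tab-separated format of the translated test set are placeholders rather than the exact scripts used here.
import numpy as np
import fasttext
from scipy.stats import spearmanr

# Skipgram model, 300 dimensions, window 5, 10 negatives, character n-grams 3-6,
# minimum word count 3 (the corpus path is a placeholder).
model = fasttext.train_unsupervised("curated_corpus.txt", model="skipgram",
                                    dim=300, ws=5, neg=10,
                                    minn=3, maxn=6, minCount=3)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Evaluate on a translated wordSim-353 file with lines "word1<TAB>word2<TAB>score".
human, predicted = [], []
with open("wordsim353_translated.tsv", encoding="utf-8") as f:
    for line in f:
        w1, w2, score = line.rstrip("\n").split("\t")
        human.append(float(score))
        predicted.append(cosine(model.get_word_vector(w1), model.get_word_vector(w2)))

print("Spearman rho:", spearmanr(human, predicted).correlation)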
Table TABREF15 shows the Spearman correlation between human judgements and cosine similarity scores on the wordSim-353 test set. Notice that pre-trained embeddings on Wikipedia show a very low correlation with humans on the similarity task for both languages ($\rho $=$0.14$) and their performance is even lower when Common Crawl is also considered ($\rho $=$0.07$ for Yorùbá). An important reason for the low performance is the limited vocabulary. The pre-trained Twi model has only 935 tokens. For Yorùbá, things are apparently better with more than 150 k tokens when both Wikipedia and Common Crawl are used but correlation is even lower. An inspection of the pre-trained embeddings indicates that over 135 k words belong to other languages mostly English, French and Arabic.
If we focus only on Wikipedia, we see that many texts are without diacritics in Yorùbá and often make use of mixed dialects and English sentences in Twi.
The Spearman $\rho $ correlation for fastText models on the curated small dataset (clean), C1, improves the baselines by a large margin ($\rho =0.354$ for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be justified just by the larger vocabulary in Twi, but in the case of Yorùbá the enhancement is there with almost half of the vocabulary size. We found out that adding some noisy texts (C2 dataset) slightly improves the correlation for Twi language but not for the Yorùbá language. The Twi language benefits from Wikipedia articles because its inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. However, for Yorùbá, noisy texts often ignore diacritics or tonal marks which increases the vocabulary size at the cost of an increment in the ambiguity too. As a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and with full diacritics. Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task a $\Delta \rho =+0.25$ or, equivalently, by an increment on $\rho $ of 170% (Twi) and 180% (Yorùbá).
<<</FastText Training and Evaluation>>>
<<<CWE Training and Evaluation>>>
The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estimations. In this work, we compare the standard fastText methodology to include sub-word information with the character-enhanced approach with position-based clustered embeddings (CWE-LP as introduced in Section SECREF17). With the latter, we expect to specifically address the ambiguity present in a language that does not translate the different oral tones on vowels into the written language.
The character-enhanced word embeddings are trained using a skipgram architecture with cluster-based embeddings and an embedding size of 300 dimensions, context window-size of 5, and 5 negative samples. In this case, the best performance is obtained with a minimum word count of 1, and that increases the effective vocabulary that is used for training the embeddings with respect to the fastText experiments reported in Table TABREF15.
We repeat the same experiments as with fastText and summarise them in Table TABREF16. If we compare the relative numbers for the three datasets (C1, C2 and C3) we observe the same trends as before: the performance of the embeddings in the similarity task improves with the vocabulary size when the training data can be considered clean, but the performance diminishes when the data is noisy.
According to the results, CWE is specially beneficial for Twi but not always for Yorùbá. Clean Yorùbá text, does not have the ambiguity issues at character-level, therefore the $n$-gram approximation works better when enough clean data is used ($\rho ^{C3}_{CWE}=0.354$ vs. $\rho ^{C3}_{fastText}=0.391$) but it does not when too much noisy data (no diacritics, therefore character-level information would be needed) is used ($\rho ^{C2}_{CWE}=0.345$ vs. $\rho ^{C2}_{fastText}=0.302$). For Twi, the character-level information reinforces the benefits of clean data and the best correlation with human judgements is reached with CWE embeddings ($\rho ^{C2}_{CWE}=0.437$ vs. $\rho ^{C2}_{fastText}=0.388$).
<<</CWE Training and Evaluation>>>
<<<BERT Evaluation on NER Task>>>
In order to go beyond the similarity task using static word vectors, we also investigate the quality of the multilingual BERT embeddings by fine-tuning a named entity recognition task on the Yorùbá Global Voices corpus.
One of the major advantages of pre-trained BERT embeddings is that fine-tuning of the model on downstream NLP tasks is typically computationally inexpensive, often requiring only a few epochs. However, the data the embeddings are trained on has the same limitations as that used in massive word embeddings. Fine-tuning involves replacing the last layer of BERT used for optimizing the masked LM with a task-dependent linear classifier or any other deep learning architecture, and training all the model parameters end-to-end. For the NER task, we obtain the token-level representation from BERT and train a linear classifier for sequence tagging.
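A minimal sketch of this setup with the Hugging Face transformers library is given below; the BIO label scheme, the dataset objects and the trainer settings are simplifying assumptions rather than the exact training code used here.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          Trainer, TrainingArguments)

# DATE, LOC, ORG, PER entities plus "O"; a BIO scheme is assumed here.
LABELS = ["O", "B-DATE", "I-DATE", "B-LOC", "I-LOC",
          "B-ORG", "I-ORG", "B-PER", "I-PER"]

def finetune_ner(train_dataset, eval_dataset, epochs=4):
    # `train_dataset` / `eval_dataset` are assumed to be tokenised datasets whose
    # labels are already aligned to BERT word pieces.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
    model = AutoModelForTokenClassification.from_pretrained(
        "bert-base-multilingual-uncased", num_labels=len(LABELS))
    args = TrainingArguments(output_dir="yoruba-ner",
                             num_train_epochs=epochs,
                             per_device_train_batch_size=16)
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_dataset,
                      eval_dataset=eval_dataset,
                      tokenizer=tokenizer)
    trainer.train()
    return model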
Similar to our observations with non-contextualized embeddings, we find out that fine-tuning the pre-trained multilingual-uncased BERT for 4 epochs on the NER task gives an F1 score of 0. If we do the same experiment in English, F1 is 58.1 after 4 epochs.
That shows how pre-trained embeddings by themselves do not perform well in downstream tasks on low-resource languages. To address this problem for Yorùbá, we fine-tune BERT representations on the Yorùbá corpus in two ways: (i) using the multilingual vocabulary, and (ii) using only Yorùbá vocabulary. In both cases diacritics are ignored to be consistent with the base model training.
As expected, the fine-tuning of the pre-trained BERT on the Yorùbá corpus in the two configurations generates better representations than the base model. These models are able to achieve a better performance on the NER task with an average F1 score of over 47% (see Table TABREF26 for the comparison). The fine-tuned BERT model with only Yorùbá vocabulary further improves the F1 score by more than 4% over the tuning that uses the multilingual vocabulary. Although we do not have enough data to train BERT from scratch, we observe that fine-tuning BERT on a limited amount of monolingual data of a low-resource language helps to improve the quality of the embeddings. The same observation holds true for high-resource languages like German and French BIBREF23.
<<</BERT Evaluation on NER Task>>>
<<</Experiments>>>
<<</Semantic Representations>>>
<<<Summary and Discussion>>>
In this paper, we present curated word and contextual embeddings for Yorùbá and Twi. For this purpose, we gather and select corpora and study the most appropriate techniques for the languages. We also create test sets for the evaluation of the word embeddings within a word similarity task (wordsim353) and the contextual embeddings within a NER task. Corpora, embeddings and test sets are available in github.
In our analysis, we show how massively generated embeddings perform poorly for low-resourced languages as compared to the performance for high-resourced ones. This is due both to the quantity but also the quality of the data used. While the Pearson $\rho $ correlation for English obtained with fastText embeddings trained on Wikipedia (WP) and Common Crawl (CC) are $\rho _{WP}$=$0.67$ and $\rho _{WP+CC}$=$0.78$, the equivalent ones for Yorùbá are $\rho _{WP}$=$0.14$ and $\rho _{WP+CC}$=$0.07$. For Twi, only embeddings with Wikipedia are available ($\rho _{WP}$=$0.14$). By carefully gathering high-quality data and optimising the models to the characteristics of each language, we deliver embeddings with correlations of $\rho $=$0.39$ (Yorùbá) and $\rho $=$0.44$ (Twi) on the same test set, still far from the high-resourced models, but representing an improvement over $170\%$ on the task.
In a low-resourced setting, the data quality, processing and model selection is more critical than in a high-resourced scenario. We show how the characteristics of a language (such as diacritization in our case) should be taken into account in order to choose the relevant data and model to use. As an example, Twi word embeddings are significantly better when training on 742 k selected tokens than on 16 million noisy tokens, and when using a model that takes into account single character information (CWE-LP) instead of $n$-gram information (fastText).
Finally, we want to note that, even within a corpus, the quality of the data might depend on the language. Wikipedia is usually used as a high-quality, freely available multilingual corpus as compared to noisier data such as Common Crawl. However, for the two languages under study, Wikipedia turned out to have too much noise: interference from other languages, text clearly written by non-native speakers, lack of diacritics and mixture of dialects. The JW300 corpus, on the other hand, has been rated as high-quality by our native Yorùbá speakers, but as noisy by our native Twi speakers. In both cases, experiments confirm the conclusions.
<<</Summary and Discussion>>>
<<<Acknowledgements>>>
The authors thank Dr. Clement Odoje of the Department of Linguistics and African Languages, University of Ibadan, Nigeria and Olóyè Gbémisóyè Àrdèó for helping us with the Yorùbá translation of the WordSim-353 word pairs and Dr. Felix Y. Adu-Gyamfi and Ps. Isaac Sarfo for helping with the Twi translation. We also thank the members of the Niger-Volta Language Technologies Institute for providing us with clean Yorùbá corpus
The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee). Responsibility for the content of this publication is with the authors.
<<</Acknowledgements>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nLanguages under Study\nYorùbá\nTwi\nData\nTraining Corpora\nEvaluation Test Sets\nYorùbá.\nSemantic Representations\nWord Embeddings Architectures\nExperiments\nFastText Training and Evaluation\nCWE Training and Evaluation\nBERT Evaluation on NER Task\nSummary and Discussion\nAcknowledgements"
],
"type": "outline"
}
|
2002.02224
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Citation Data of Czech Apex Courts
<<<Abstract>>>
In this paper, we introduce the citation data of the Czech apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). This dataset was automatically extracted from the corpus of texts of Czech court decisions - CzCDC 1.0. We obtained the citation data by building the natural language processing pipeline for extraction of the court decision identifiers. The pipeline included the (i) document segmentation model and the (ii) reference recognition model. Furthermore, the dataset was manually processed to achieve high-quality citation data as a base for subsequent qualitative and quantitative analyses. The dataset will be made available to the general public.
<<</Abstract>>>
<<<Introduction>>>
Analysis of the way court decisions refer to each other provides us with important insights into the decision-making process at courts. This is true both for the common law courts and for their counterparts in the countries belonging to the continental legal system. Citation data can be used for both qualitative and quantitative studies, casting light on the behavior of specific judges through document analysis or allowing complex studies into the changing nature of courts in transforming countries.
That being said, it is still difficult to create sufficiently large citation datasets to allow complex research. In the case of the Czech Republic, it was difficult to obtain a relevant dataset of the court decisions of the apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). Due to its size, it is nearly impossible to extract the references manually. One has to resort to automating such a task. However, the study of court decisions revealed many different ways that courts use to cite even their own decisions, not to mention the decisions of other courts. The great diversity in citations led us to the use of natural language processing for the recognition and the extraction of the citation data from court decisions of the Czech apex courts.
In this paper, we describe the tool ultimately used for the extraction of the references from the court decisions, together with a subsequent way of manual processing of the raw data to achieve a higher-quality dataset. Section SECREF2 maps the related work in the area of legal citation analysis (Section SECREF1), reference recognition (Section SECREF2), text segmentation (Section SECREF4), and data availability (Section SECREF3). Section SECREF3 describes the method we used for the citation extraction, listing the individual models and the way we have combined these models into the NLP pipeline. Section SECREF4 presents results in terms of the evaluation of the performance of our pipeline, the statistics of the raw data, further manual processing and statistics of the final citation dataset. Section SECREF5 discusses limitations of our work and outlines the possible future development. Section SECREF6 concludes this paper.
<<</Introduction>>>
<<<Related work>>>
<<<Legal Citation Analysis>>>
Legal citation analysis is an emerging phenomenon in the fields of legal theory and empirical legal research. It employs tools provided by the field of network analysis.
In spite of the long-term use of citations in the legal domain (e.g. the use of Shepard's Citations since 1873), interest in network citation analysis increased significantly when Fowler et al. published two pivotal works on the case law citations of the Supreme Court of the United States BIBREF0, BIBREF1. The authors used the citation data and network analysis to test hypotheses about the function of the stare decisis doctrine and other issues of legal precedents. In the continental legal system, this work was followed by Winkels and de Ruyter BIBREF2, who applied an approach similar to Fowler's to the court decisions of the Dutch Supreme Court. Similar methods were later used by Derlén and Lindholm BIBREF3, BIBREF4 and Panagis and Šadl BIBREF5 for the citation data of the Court of Justice of the European Union, and by Olsen and Küçüksu for the citation data of the European Court of Human Rights BIBREF6.
Additionally, a minor part of research in legal network analysis has in the past resulted in practical tools designed to help lawyers conduct case law research. Kuppevelt and van Dijck built prototypes employing these techniques in the Netherlands BIBREF7. Görög and Weisz introduced a new legal information retrieval system, Justeus, based on a large database of legal sources and partly on network analysis methods BIBREF8.
<<</Legal Citation Analysis>>>
<<<Reference Recognition>>>
The area of reference recognition already contains a large amount of work. It is concerned with recognizing text spans in documents that are referring to other documents. As such, it is a classical topic within the AI & Law literature.
The extraction of references from the Italian legislation based on regular expressions was reported by Palmirani et al. BIBREF9. The main goal was to bring references under a set of common standards to ensure the interoperability between different legal information systems.
De Maat et al. BIBREF10 focused on an automated detection of references to legal acts in Dutch language. Their approach consisted of a grammar covering increasingly complex citation patterns.
Opijnen BIBREF11 aimed for reference recognition and reference standardization using regular expressions accounting for multiple variants of the same reference and multiple vendor-specific identifiers.
The language-specific work by Kríž et al. BIBREF12 focused on detecting and classifying references to other court decisions and legal acts. The authors used statistical recognition (HMM and Perceptron algorithms) and reported an F1-measure over 90% averaged over all entities. It is the state of the art in the automatic recognition of references in Czech court decisions. Unfortunately, it allows only for the detection of docket numbers and is unable to recognize court-specific or vendor-specific identifiers in the court decisions.
Other language-specific work includes our previous reference recognition model presented in BIBREF13. The prediction model is based on conditional random fields and allows recognition of different constituents which then establish both explicit and implicit case-law and doctrinal references. Parts of this model were used in the pipeline described further within this paper in Section SECREF3.
<<</Reference Recognition>>>
<<<Data Availability>>>
Large scale quantitative and qualitative studies are often hindered by the unavailability of court data. Access to court decisions is often hindered by different obstacles. In some countries, court decisions are not available at all, while in some other they are accessible only through legal information systems, often proprietary. This effectively restricts the access to court decisions in terms of the bulk data. This issue was already approached by many researchers either through making available selected data for computational linguistics studies or by making available datasets of digitized data for various purposes. Non-exhaustive list of publicly available corpora includes British Law Report Corpus BIBREF14, The Corpus of US Supreme Court Opinions BIBREF15,the HOLJ corpus BIBREF16, the Corpus of Historical English Law Reports, Corpus de Sentencias Penales BIBREF17, Juristisches Referenzkorpus BIBREF18 and many others.
Language specific work in this area is presented by the publicly available Czech Court Decisions Corpus (CzCDC 1.0) BIBREF19. This corpus contains majority of court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court, hence allowing a large-scale extraction of references to yield representative results. The CzCDC 1.0 was used as a dataset for extraction of the references as is described further within this paper in Section SECREF3. Unfortunately, despite containing 237 723 court decisions issued between 1st January 1993 and 30th September 2018, it is not complete. This fact is reflected in the analysis of the results.
<<</Data Availability>>>
<<<Document Segmentation>>>
A large volume of legal information is available in unstructured form, which makes processing these data a challenging task – both for human lawyers and for computers. Schweighofer BIBREF20 called for generic tools allowing document segmentation to ease the processing of unstructured data by giving them some structure.
Topic-based segmentation often focuses on identifying specific sentences that present borderlines between different textual segments.
The automatic segmentation is not an individual goal – it always serves as a prerequisite for further tasks requiring structured data. Segmentation is required for the text summarization BIBREF21, BIBREF22, keyword extraction BIBREF23, textual information retrieval BIBREF24, and other applications requiring input in the form of structured data.
A major part of research is focused on semantic similarity methods. Computing the similarity between parts of a text presumes that a decrease in similarity means a topical border between two text segments. This approach was introduced by Hearst BIBREF22 and was used by Choi BIBREF25 and Heinonen BIBREF26 as well.
Another approach takes word frequencies and presumes a border according to different key words extracted. Reynar BIBREF27 authored a graphical method based on statistics called dotplotting. Similar techniques were used by Ye BIBREF28 or Saravanan BIBREF29. Bommarito et al. BIBREF30 introduced a Python library combining different features, including pre-trained models, for use in automatic legal text segmentation. Li BIBREF31 included a neural network in his method to segment Chinese legal texts.
Šavelka and Ashley BIBREF32 similarly introduced a machine learning-based approach for the segmentation of US court decision texts into seven different parts. The authors reached high success rates in recognizing especially the Introduction and Analysis parts of the decisions.
Language specific work includes the model presented by Harašta et al. BIBREF33. This work focuses on segmentation of the Czech court decisions into pre-defined topical segments. Parts of this segmentation model were used in the pipeline described further within this paper in Section SECREF3.
<<</Document Segmentation>>>
<<</Related work>>>
<<<Methodology>>>
In this paper, we present and describe the citation dataset of the Czech top-tier courts. To obtain this dataset, we have processed the court decisions contained in the CzCDC 1.0 dataset with an NLP pipeline consisting of the segmentation model introduced in BIBREF33, and parts of the reference recognition model presented in BIBREF13. The process is described in this section.
<<<Dataset and models>>>
<<<CzCDC 1.0 dataset>>>
Novotná and Harašta BIBREF19 prepared a dataset of the court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court. The dataset contains 237,723 decisions published between 1st January 1993 and 30th September 2018. These decisions are organised into three sub-corpora. The sub-corpus of the Supreme Court contains 111,977 decisions, the sub-corpus of the Supreme Administrative Court contains 52,660 decisions and the sub-corpus of the Constitutional Court contains 73,086 decisions. The authors in BIBREF19 assessed that the CzCDC currently contains approximately 91% of all decisions of the Supreme Court, 99.5% of all decisions of the Constitutional Court, and 99.9% of all decisions of the Supreme Administrative Court. As such, it presents the best currently available dataset of the Czech top-tier court decisions.
<<</CzCDC 1.0 dataset>>>
<<<Reference recognition model>>>
Harašta and Šavelka BIBREF13 introduced a reference recognition model trained specifically for the Czech top-tier courts. Moreover, the authors made their training data available in BIBREF34. Given the lack of a single citation standard, references in this work consist of smaller units, because these were identified as more uniform and therefore better suited for automatic detection. The model was trained using conditional random fields, which is a random field model that is globally conditioned on an observation sequence O. The states of the model correspond to event labels E. The authors used a first-order conditional random field. The model was trained for each type of smaller unit independently.
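To make this concrete, a first-order linear-chain CRF tagger of this kind can be sketched with the sklearn-crfsuite package as follows; the feature template and the BIO-style label set for court identifiers are illustrative assumptions, not the features or labels used in the cited model.
import sklearn_crfsuite

def token_features(tokens, i):
    # A deliberately simple feature template; the original model uses its own features.
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "has_slash": "/" in tok,  # Czech docket numbers typically contain a slash
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

def featurize(sentences):
    return [[token_features(s, i) for i in range(len(s))] for s in sentences]

def train_identifier_tagger(train_sentences, train_labels):
    # train_sentences: tokenised sentences; train_labels: per-token tags such as
    # "B-COURT_ID", "I-COURT_ID", "O" (a hypothetical label set).
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                               max_iterations=100)
    crf.fit(featurize(train_sentences), train_labels)
    return crf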
<<</Reference recognition model>>>
<<<Text segmentation model>>>
In Harašta et al. BIBREF33, the authors introduced a model for the automatic segmentation of Czech court decisions into pre-defined multi-paragraph parts. These segments include the Header (introduction of the given case), History (procedural history prior to the apex court proceeding), Submission/Rejoinder (petition of the plaintiff and response of the defendant), Argumentation (argumentation of the court hearing the case), Footer (legally required information, such as information about further proceedings), Dissent and Footnotes. The model for automatic segmentation of the text was trained using conditional random fields. The model was trained for each segment type independently.
<<</Text segmentation model>>>
<<</Dataset and models>>>
<<<Pipeline>>>
In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.
As the first step, every document in the CzCDC 1.0 was segmented using the text segmentation model. This allowed us to treat different parts of the processed court documents differently in the further text processing. Specifically, it allowed us to subject only a specific part of a court decision, in this case the court argumentation, to further reference recognition and extraction. A textual segment recognised as the court argumentation is then processed further.
As the second step, parts recognised by the text segmentation model as court argumentation were processed using the reference recognition model. After carefully studying the evaluation of the model's performance in BIBREF13, we decided to use only part of the said model. Specifically, we employed the recognition of the court identifiers, as we consider the rest of the smaller units introduced by Harašta and Šavelka to be of lesser value for our task. Also, deploying only the recognition of the court identifiers allowed us to avoid the problematic parsing of smaller textual units into references. The text spans recognised as identifiers of court decisions are then processed further.
At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification.
Further processing included:
control and repair of incompletely identified court identifiers (manual);
identification and sorting of identifiers as belonging to Supreme Court, Supreme Administrative Court or Constitutional Court (rule-based, manual);
standardisation of different types of court identifiers (rule-based, manual);
parsing of identifiers with court decisions available in CzCDC 1.0.
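Putting these steps together, the end-to-end extraction can be pictured roughly as in the sketch below; the `segmenter` and `identifier_tagger` objects, the normalisation helper and the CzCDC index are hypothetical stand-ins for the trained models and the manual post-processing described above.
def extract_citations(decision_text, segmenter, identifier_tagger,
                      normalise_identifier, czcdc_index):
    # 1) Segment the decision and keep only the court argumentation.
    segments = segmenter.segment(decision_text)            # hypothetical API
    argumentation = [s.text for s in segments if s.label == "Argumentation"]

    citations = []
    for text in argumentation:
        # 2) Recognise spans that look like court-decision identifiers.
        for span in identifier_tagger.tag(text):            # hypothetical API
            # 3) Standardise the identifier (court-specific formats, repairs).
            identifier = normalise_identifier(span)
            # 4) Link to a CzCDC 1.0 decision where the cited document is available.
            target = czcdc_index.get(identifier)            # None if not in CzCDC
            citations.append((identifier, target))
    return citations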
<<</Pipeline>>>
<<</Methodology>>>
<<<Results>>>
Overall, through the process described in Section SECREF3, we have retrieved three datasets of extracted references - one dataset per apex court. These datasets consist of individual pairs containing the identification of the decision from which the reference was retrieved, and the identification of the referred document. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court decisions. These are the numbers of text spans identified as references prior to the further processing described in Section SECREF3.
These references include all identifiers extracted from the court decisions contained in the CzCDC 1.0. Therefore, this number includes all other court decisions, including lower courts, the Court of Justice of the European Union, the European Court of Human Rights, decisions of other public authorities etc. Therefore, it was necessary to classify these into references referring to decisions of the Supreme Court, Supreme Administrative Court, Constitutional Court and others. These groups then underwent a standardisation - or more precisely a resolution - of different court identifiers used by the Czech courts. Numbers of the references resulting from this step are shown in Table TABREF16.
Following this step, we linked court identifiers with court decisions contained in the CzCDC 1.0. Given that the CzCDC 1.0 does not contain all the decisions of the respective courts, we were not able to parse all the references. Numbers of the references resulting from this step are shown in Table TABREF17.
<<</Results>>>
<<<Discussion>>>
This paper introduced the first dataset of citation data of the three Czech apex courts. Understandably, there are some pitfalls and limitations to our approach.
As we admitted in the evaluation in Section SECREF9, the models we included in our NLP pipelines are far from perfect. Overall, we were able to achieve a reasonable recall and precision rate, which was further enhanced by several rounds of manual processing of the resulting data. However, it is safe to say that we did not manage to extract all the references. Similarly, because the CzCDC 1.0 dataset we used does not contain all the decisions of the respective courts, we were not able to parse all court identifiers to the documents these refer to. Therefore, the future work in this area may include further development of the resources we used. The CzCDC 1.0 would benefit from the inclusion of more documents of the Supreme Court, the reference recognition model would benefit from more refined training methods etc.
That being said, the presented dataset is currently the only resource of its kind focusing on Czech court decisions that is freely available to research teams. This significantly reduces the costs necessary to conduct these types of studies involving network analysis, and similar techniques requiring a large amount of citation data.
<<</Discussion>>>
<<<Conclusion>>>
In this paper, we have described the process of the creation of the first dataset of citation data of the three Czech apex courts. The dataset is publicly available for download at https://github.com/czech-case-law-relevance/czech-court-citations-dataset.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated work\nLegal Citation Analysis\nReference Recognition\nData Availability\nDocument Segmentation\nMethodology\nDataset and models\nCzCDC 1.0 dataset\nReference recognition model\nText segmentation model\nPipeline\nResults\nDiscussion\nConclusion"
],
"type": "outline"
}
|
2003.06651
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Word Sense Disambiguation for 158 Languages using Word Embeddings Only
<<<Abstract>>>
Disambiguation of word senses in context is easy for humans, but is a major challenge for automatic approaches. Sophisticated supervised and knowledge-based models were developed to solve this task. However, (i) the inherent Zipfian distribution of supervised training instances for a given word and/or (ii) the quality of linguistic knowledge representations motivate the development of completely unsupervised and knowledge-free approaches to word sense disambiguation (WSD). They are particularly useful for under-resourced languages which do not have any resources for building either supervised and/or knowledge-based models. In this paper, we present a method that takes as input a standard pre-trained word embedding model and induces a fully-fledged word sense inventory, which can be used for disambiguation in context. We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings by Grave et al. (2018), enabling WSD in these languages. Models and system are available online.
<<</Abstract>>>
<<<>>>
<<</>>>
<<<Introduction>>>
There are many polysemous words in virtually any language. If not treated as such, they can hamper the performance of all semantic NLP tasks BIBREF0. Therefore, the task of resolving the polysemy and choosing the most appropriate meaning of a word in context has been an important NLP task for a long time. It is usually referred to as Word Sense Disambiguation (WSD) and aims at assigning meaning to a word in context.
The majority of approaches to WSD are based on the use of knowledge bases, taxonomies, and other external manually built resources BIBREF1, BIBREF2. However, different senses of a polysemous word occur in very diverse contexts and can potentially be discriminated with their help. The fact that semantically related words occur in similar contexts, and diverse words do not share common contexts, is known as distributional hypothesis and underlies the technique of constructing word embeddings from unlabelled texts. The same intuition can be used to discriminate between different senses of individual words. There exist methods of training word embeddings that can detect polysemous words and assign them different vectors depending on their contexts BIBREF3, BIBREF4. Unfortunately, many wide-spread word embedding models, such as GloVe BIBREF5, word2vec BIBREF6, fastText BIBREF7, do not handle polysemous words. Words in these models are represented with single vectors, which were constructed from diverse sets of contexts corresponding to different senses. In such cases, their disambiguation needs knowledge-rich approaches.
We tackle this problem by suggesting a method of post-hoc unsupervised WSD. It does not require any external knowledge and can separate different senses of a polysemous word using only the information encoded in pre-trained word embeddings. We construct a semantic similarity graph for words and partition it into densely connected subgraphs. This partition allows for separating different senses of polysemous words. Thus, the only language resource we need is a large unlabelled text corpus used to train embeddings. This makes our method applicable to under-resourced languages. Moreover, while other methods of unsupervised WSD need to train embeddings from scratch, we perform retrofitting of sense vectors based on existing word embeddings.
We create a massively multilingual application for on-the-fly word sense disambiguation. When receiving a text, the system identifies its language and performs disambiguation of all the polysemous words in it based on pre-extracted word sense inventories. The system works for 158 languages, for which pre-trained fastText embeddings are available BIBREF8. The created inventories are based on these embeddings. To the best of our knowledge, our system is the only WSD system for the majority of the presented languages. Although it does not match the state of the art for resource-rich languages, it is fully unsupervised and can be used for virtually any language.
The contributions of our work are the following:
[noitemsep]
We release word sense inventories associated with fastText embeddings for 158 languages.
We release a system that allows on-the-fly word sense disambiguation for 158 languages.
We present egvi (Ego-Graph Vector Induction), a new algorithm of unsupervised word sense induction, which creates sense inventories based on pre-trained word vectors.
<<</Introduction>>>
<<<Related Work>>>
There are two main scenarios for WSD: the supervised approach that leverages training corpora explicitly labelled for word sense, and the knowledge-based approach that derives sense representation from lexical resources, such as WordNet BIBREF9. In the supervised case WSD can be treated as a classification problem. Knowledge-based approaches construct sense embeddings, i.e. embeddings that separate various word senses.
SupWSD BIBREF10 is a state-of-the-art system for supervised WSD. It makes use of linear classifiers and a number of features such as POS tags, surrounding words, local collocations, word embeddings, and syntactic relations. GlossBERT model BIBREF11, which is another implementation of supervised WSD, achieves a significant improvement by leveraging gloss information. This model benefits from sentence-pair classification approach, introduced by Devlin:19 in their BERT contextualized embedding model. The input to the model consists of a context (a sentence which contains an ambiguous word) and a gloss (sense definition) from WordNet. The context-gloss pair is concatenated through a special token ([SEP]) and classified as positive or negative.
On the other hand, sense embeddings are an alternative to traditional word vector models such as word2vec, fastText or GloVe, which represent monosemous words well but fail for ambiguous words. Sense embeddings represent individual senses of polysemous words as separate vectors. They can be linked to an explicit inventory BIBREF12 or induce a sense inventory from unlabelled data BIBREF13. LSTMEmbed BIBREF13 aims at learning sense embeddings linked to BabelNet BIBREF14, at the same time handling word ordering, and using pre-trained embeddings as an objective. Although it was tested only on English, the approach can be easily adapted to other languages present in BabelNet. However, manually labelled datasets as well as knowledge bases exist only for a small number of well-resourced languages. Thus, to disambiguate polysemous words in other languages one has to resort to fully unsupervised techniques.
The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word. WSI approaches fall into three main groups: context clustering, word ego-network clustering and synonyms (or substitute) clustering.
Context clustering approaches consist in creating vectors which characterise words' contexts and clustering these vectors. Here, the definition of context may vary from window-based context to latent topic-alike context. Afterwards, the resulting clusters are either used as senses directly BIBREF15, or employed further to learn sense embeddings via Chinese Restaurant Process algorithm BIBREF16, AdaGram, a Bayesian extension of the Skip-Gram model BIBREF17, AutoSense, an extension of the LDA topic model BIBREF18, and other techniques.
Word ego-network clustering is applied to semantic graphs. The nodes of a semantic graph are words, and edges between them denote semantic relatedness which is usually evaluated with cosine similarity of the corresponding embeddings BIBREF19 or by PMI-like measures BIBREF20. Word senses are induced via graph clustering algorithms, such as Chinese Whispers BIBREF21 or MaxMax BIBREF22. The technique suggested in our work belongs to this class of methods and is an extension of the method presented by Pelevina:16.
Synonyms and substitute clustering approaches create vectors which represent synonyms or substitutes of polysemous words. Such vectors are created using synonymy dictionaries BIBREF23 or context-dependent substitutes obtained from a language model BIBREF24. Analogously to previously described techniques, word senses are induced by clustering these vectors.
<<</Related Work>>>
<<<Algorithm for Word Sense Induction>>>
The majority of word vector models do not discriminate between multiple senses of individual words. However, a polysemous word can be identified via manual analysis of its nearest neighbours—they reflect different senses of the word. Table TABREF7 shows manually sense-labelled most similar terms to the word Ruby according to the pre-trained fastText model BIBREF8. As it was suggested early by Widdows:02, the distributional properties of a word can be used to construct a graph of words that are semantically related to it, and if a word is polysemous, such graph can easily be partitioned into a number of densely connected subgraphs corresponding to different senses of this word. Our algorithm is based on the same principle.
<<<SenseGram: A Baseline Graph-based Word Sense Induction Algorithm>>>
SenseGram is the method proposed by Pelevina:16 that separates nearest neighbours to induce word senses and constructs sense embeddings for each sense. It starts by constructing an ego-graph (semantic graph centred at a particular word) of the word and its nearest neighbours. The edges between the words denote their semantic relatedness, e.g. the two nodes are joined with an edge if cosine similarity of the corresponding embeddings is higher than a pre-defined threshold. The resulting graph can be clustered into subgraphs which correspond to senses of the word.
The sense vectors are then constructed by averaging embeddings of words in each resulting cluster. In order to use these sense vectors for word sense disambiguation in text, the authors compute the probabilities of sense vectors of a word given its context or the similarity of the sense vectors to the context.
<<</SenseGram: A Baseline Graph-based Word Sense Induction Algorithm>>>
<<<egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm>>>
<<<Induction of Sense Inventories>>>
One of the downsides of the algorithm described above is noise in the generated graph, namely, unrelated words and wrong connections. They hamper the separation of the graph. Another weak point is the imbalance in the nearest neighbour list, when a large part of it is attributed to the most frequent sense, not sufficiently representing the other senses. This can lead to the construction of incorrect sense vectors.
We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:
Extract a list $\mathcal {N}$ = {$w_{1}$, $w_{2}$, ..., $w_{N}$} of $N$ nearest neighbours for the target (ego) word vector $w$.
Compute a list $\Delta $ = {$\delta _{1}$, $\delta _{2}$, ..., $\delta _{N}$} for each $w_{i}$ in $\mathcal {N}$, where $\delta _{i}~=~w-w_{i}$. The vectors in $\Delta $ contain the components of the sense of $w$ which are not related to the corresponding nearest neighbours from $\mathcal {N}$.
Compute a list $\overline{\mathcal {N}}$ = {$\overline{w_{1}}$, $\overline{w_{2}}$, ..., $\overline{w_{N}}$}, such that $\overline{w_{i}}$ is in the top nearest neighbours of $\delta _{i}$ in the embedding space. In other words, $\overline{w_{i}}$ is a word which is the most similar to the target (ego) word $w$ and least similar to its neighbour $w_{i}$. We refer to $\overline{w_{i}}$ as an anti-pair of $w_{i}$. The set of $N$ nearest neighbours and their anti-pairs form a set of anti-edges i.e. pairs of most dissimilar nodes – those which should not be connected: $\overline{E} = \lbrace (w_{1},\overline{w_{1}}), (w_{2},\overline{w_{2}}), ..., (w_{N},\overline{w_{N}})\rbrace $.
To clarify this, consider the target (ego) word $w = \textit {python}$, its top similar term $w_1 = \textit {Java}$ and the resulting anti-pair $\overline{w_i} = \textit {snake}$ which is the top related term of $\delta _1 = w - w_1$. Together they form an anti-edge $(w_i,\overline{w_i})=(\textit {Java}, \textit {snake})$ composed of a pair of semantically dissimilar terms.
Construct $V$, the set of vertices of our semantic graph $G=(V,E)$ from the list of anti-edges $\overline{E}$, with the following recurrent procedure: $V = V \cup \lbrace w_{i}, \overline{w_{i}}: w_{i} \in \mathcal {N}, \overline{w_{i}} \in \mathcal {N}\rbrace $, i.e. we add a word from the list of nearest neighbours and its anti-pair only if both of them are nearest neighbours of the original word $w$. We do not add $w$'s nearest neighbours if their anti-pairs do not belong to $\mathcal {N}$. Thus, we add only words which can help discriminating between different senses of $w$.
Construct the set of edges $E$ as follows. For each $w_{i}~\in ~\mathcal {N}$ we extract a set of its $K$ nearest neighbours $\mathcal {N}^{\prime }_{i} = \lbrace u_{1}, u_{2}, ..., u_{K}\rbrace $ and define $E = \lbrace (w_{i}, u_{j}): w_{i}~\in ~V, u_j~\in ~V, u_{j}~\in ~\mathcal {N}^{\prime }_{i}, u_{j}~\ne ~\overline{w_{i}}\rbrace $. In other words, we remove edges between a word $w_{i}$ and its nearest neighbour $u_j$ if $u_j$ is also its anti-pair. According to our hypothesis, $w_{i}$ and $\overline{w_{i}}$ belong to different senses of $w$, so they should not be connected (i.e. we never add anti-edges into $E$). Therefore, we consider any connection between them as noise and remove it.
Note that $N$ (the number of nearest neighbours for the target word $w$) and $K$ (the number of nearest neighbours of $w_{i}$) do not have to match. The difference between these parameters is the following. $N$ defines how many words will be considered for the construction of the ego-graph. On the other hand, $K$ defines the degree of relatedness between words in the ego-graph — if $K = 50$, then we will connect vertices $w$ and $u$ with an edge only if $u$ is in the list of 50 nearest neighbours of $w$. Increasing $K$ increases the graph connectivity and leads to lower granularity of senses.
According to our hypothesis, nearest neighbours of $w$ are grouped into clusters in the vector space, and each of the clusters corresponds to a sense of $w$. The described vertices selection procedure allows picking the most representative members of these clusters which are better at discriminating between the clusters. In addition to that, it helps dealing with the cases when one of the clusters is over-represented in the nearest neighbour list. In this case, many elements of such a cluster are not added to $V$ because their anti-pairs fall outside the nearest neighbour list. This also improves the quality of clustering.
After the graph construction, the clustering is performed using the Chinese Whispers algorithm BIBREF21. This is a bottom-up clustering procedure that does not require to pre-define the number of clusters, so it can correctly process polysemous words with varying numbers of senses as well as unambiguous words.
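For illustration, the following is a minimal Python sketch of the anti-edge filtering and graph construction described above (steps 1-5), assuming a gensim KeyedVectors model loaded from the pre-trained fastText vectors; function and variable names are illustrative, and the final clustering step can be performed with any Chinese Whispers implementation.

    import networkx as nx
    from gensim.models import KeyedVectors

    def egvi_ego_graph(kv: KeyedVectors, word: str, n: int = 50, k: int = 50) -> nx.Graph:
        # Step 1: the N nearest neighbours of the ego word.
        neighbours = [w for w, _ in kv.most_similar(word, topn=n)]
        # Steps 2-3: the anti-pair of each neighbour w_i is the word closest to
        # the difference vector (w - w_i); taking the single top hit is a simplification.
        anti_pair = {}
        for w_i in neighbours:
            delta = kv[word] - kv[w_i]
            anti_pair[w_i] = kv.similar_by_vector(delta, topn=1)[0][0]
        # Step 4: keep a neighbour and its anti-pair only if both are nearest neighbours of w.
        vertices = set()
        for w_i, a_i in anti_pair.items():
            if a_i in neighbours:
                vertices.update((w_i, a_i))
        # Step 5: connect w_i to its K nearest neighbours that are also vertices,
        # never adding an edge between a word and its own anti-pair.
        graph = nx.Graph()
        graph.add_nodes_from(vertices)
        for w_i in vertices:
            for u_j, _ in kv.most_similar(w_i, topn=k):
                if u_j in vertices and u_j != anti_pair.get(w_i):
                    graph.add_edge(w_i, u_j)
        return graph

    # The returned graph can then be clustered with Chinese Whispers to obtain one
    # cluster of nearest neighbours per induced sense.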
Figure FIGREF17 shows an example of the resulting pruned graph for the word Ruby for $N = 50$ nearest neighbours in terms of the fastText cosine similarity. In contrast to the baseline method by BIBREF19 where all 50 terms are clustered, in the method presented in this section we sparsify the graph by removing 13 nodes which were not in the set of the “anti-edges”, i.e. pairs of most dissimilar terms, out of these 50 neighbours. Examples of anti-edges, i.e. pairs of most dissimilar terms, for this graph include: (Haskell, Sapphire), (Garnet, Rails), (Opal, Rubyist), (Hazel, RubyOnRails), and (Coffeescript, Opal).
<<</Induction of Sense Inventories>>>
<<<Labelling of Induced Senses>>>
We label each word cluster representing a sense to make them and the WSD results interpretable by humans. Prior systems used hypernyms to label the clusters BIBREF25, BIBREF26, e.g. “animal” in the “python (animal)”. However, neither hypernyms nor rules for their automatic extraction are available for all 158 languages. Therefore, we use a simpler method to select a keyword which would help to interpret each cluster. For each graph node $v \in V$ we count the number of anti-edges it belongs to: $count(v) = | \lbrace (w_i,\overline{w_i}) : (w_i,\overline{w_i}) \in \overline{E} \wedge (v = w_i \vee v = \overline{w_i}) \rbrace |$. A graph clustering yields a partition of $V$ into $n$ clusters: $V~=~\lbrace V_1, V_2, ..., V_n\rbrace $. For each cluster $V_i$ we define a keyword $w^{key}_i$ as the word with the largest number of anti-edges $count(\cdot )$ among words in this cluster.
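A hedged sketch of this labelling rule, assuming the clusters and the set of anti-edges produced by the procedure above are already available (names are illustrative):

    from collections import Counter

    def cluster_keywords(clusters, anti_edges):
        # clusters: list of sets of words; anti_edges: list of (w_i, anti_w_i) pairs.
        counts = Counter()
        for w_i, a_i in anti_edges:
            counts[w_i] += 1
            counts[a_i] += 1
        # The keyword of a cluster is its member participating in the most anti-edges.
        return [max(cluster, key=lambda v: counts[v]) for cluster in clusters]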
<<</Labelling of Induced Senses>>>
<<<Word Sense Disambiguation>>>
We use the keywords defined above to obtain vector representations of senses. In particular, we simply use the word embedding of the keyword $w^{key}_i$ as the sense representation $\mathbf {s}_i$ of the target word $w$ to avoid explicit computation of sense embeddings like in BIBREF19. Given a sentence $\lbrace w_1, w_2, ..., w_{j}, w, w_{j+1}, ..., w_n\rbrace $ represented as a matrix of word vectors, we define the context of the target word $w$ as $\textbf {c}_w = \dfrac{\sum _{j=1}^{n} w_j}{n}$. Then, we define the most appropriate sense $\hat{s}$ as the sense with the highest cosine similarity to the embedding of the word's context: $\hat{s} = \arg \max _{i} \cos (\mathbf {s}_i, \textbf {c}_w)$.
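A minimal numpy sketch of this disambiguation rule, assuming the keyword (sense) vectors and the context word vectors are given as arrays; names are illustrative:

    import numpy as np

    def disambiguate(sense_vectors: np.ndarray, context_word_vectors: np.ndarray) -> int:
        # Context embedding: mean of the vectors of the words surrounding the target word.
        c_w = context_word_vectors.mean(axis=0)
        # Return the index of the sense whose keyword vector is most similar to the context.
        sims = sense_vectors @ c_w / (
            np.linalg.norm(sense_vectors, axis=1) * np.linalg.norm(c_w) + 1e-9)
        return int(np.argmax(sims))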
<<</Word Sense Disambiguation>>>
<<</egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm>>>
<<</Algorithm for Word Sense Induction>>>
<<<System Design>>>
We release a system for on-the-fly WSD for 158 languages. Given textual input, it identifies polysemous words and retrieves senses that are the most appropriate in the context.
<<<Construction of Sense Inventories>>>
To build word sense inventories (sense vectors) for 158 languages, we utilised GPU-accelerated routines for the search of similar vectors implemented in the Faiss library BIBREF27. The search for nearest neighbours takes substantial time; therefore, acceleration with GPUs helps to significantly reduce the word sense construction time. To further speed up the process, we keep all intermediate results in memory, which results in substantial RAM consumption of up to 200 Gb.
The construction of word senses for all of the 158 languages takes a lot of computational resources and imposes high requirements on the hardware. For calculations, we use 10–20 nodes of the Zhores cluster BIBREF28 in parallel, equipped with Nvidia Tesla V100 graphics cards. For each of the languages, we construct inventories based on 50, 100, and 200 neighbours for the 100,000 most frequent words. The vocabulary was limited in order to make the computation time feasible. The construction of inventories for one language takes up to 10 hours, with $6.5$ hours on average. Building the inventories for all languages took more than 1,000 hours of GPU-accelerated computations. We release the constructed sense inventories for all the available languages. They contain all the necessary information for using them in the proposed WSD system or in other downstream tasks.
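The nearest-neighbour search at the core of this step can be sketched as follows; this is an illustrative CPU-only use of Faiss with cosine similarity via inner product on L2-normalised vectors, not the authors' exact GPU pipeline:

    import numpy as np
    import faiss

    def top_neighbours(vectors: np.ndarray, n: int = 200) -> np.ndarray:
        # L2-normalise so that inner product equals cosine similarity.
        x = np.ascontiguousarray(vectors.astype(np.float32))
        faiss.normalize_L2(x)
        index = faiss.IndexFlatIP(x.shape[1])
        index.add(x)
        # Search n + 1 hits because the closest hit of each vector is the vector itself.
        _, ids = index.search(x, n + 1)
        return ids[:, 1:]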
<<</Construction of Sense Inventories>>>
<<<Word Sense Disambiguation System>>>
The first text pre-processing step is language identification, for which we use the fastText language identification models by Bojanowski:17. Then the input is tokenised. For languages which use Latin, Cyrillic, Hebrew, or Greek scripts, we employ the Europarl tokeniser. For Chinese, we use the Stanford Word Segmenter BIBREF29. For Japanese, we use Mecab BIBREF30. We tokenise Vietnamese with UETsegmenter BIBREF31. All other languages are processed with the ICU tokeniser, as implemented in the PyICU project. After the tokenisation, the system analyses all the input words with pre-extracted sense inventories and defines the most appropriate sense for polysemous words.
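A simplified sketch of the front end of this pipeline, using the released fastText language identification model; the model file name and the whitespace fallback are illustrative stand-ins for the components listed above:

    import fasttext

    lid_model = fasttext.load_model("lid.176.bin")  # fastText language identification model

    def identify_language(text: str) -> str:
        labels, _ = lid_model.predict(text.replace("\n", " "))
        return labels[0].replace("__label__", "")  # e.g. "en", "ru", "zh"

    def tokenise(text: str, lang: str) -> list:
        # The real system routes zh/ja/vi to dedicated segmenters and other scripts to
        # the Europarl or ICU tokenisers; a whitespace split stands in for them here.
        return text.split()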
Figure FIGREF19 shows the interface of the system. It has a textual input form. The automatically identified language of text is shown above. A click on any of the words displays a prompt (shown in black) with the most appropriate sense of a word in the specified context and the confidence score. In the given example, the word Jaguar is correctly identified as a car brand. This system is based on the system by Ustalov:18, extending it with a back-end for multiple languages, language detection, and sense browsing capabilities.
<<</Word Sense Disambiguation System>>>
<<</System Design>>>
<<<Evaluation>>>
We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on the WSD task.
<<<Lexical Similarity and Relatedness>>>
<<<Experimental Setup>>>
We use the SemR-11 datasets BIBREF32, which contain word pairs with manually assigned similarity scores from 0 (words are not related) to 10 (words are fully interchangeable) for 12 languages: English (en), Arabic (ar), German (de), Spanish (es), Farsi (fa), French (fr), Italian (it), Dutch (nl), Portuguese (pt), Russian (ru), Swedish (sv), Chinese (zh). The task is to assign relatedness scores to these pairs so that the ranking of the pairs by this score is close to the ranking defined by the oracle score. The performance is measured with Pearson correlation of the rankings. Since one word can have several different senses in our setup, we follow Remus:18 and define the relatedness score for a pair of words as the maximum cosine similarity between any of their sense vectors.
We extract the sense inventories from fastText embedding vectors. We set $N=K$ for all our experiments, i.e. the number of vertices in the graph and the maximum number of vertices' nearest neighbours match. We conduct experiments with $N=K$ set to 50, 100, and 200. For each cluster $V_i$ we create a sense vector $s_i$ by averaging vectors that belong to this cluster. We rely on the methodology of BIBREF33, shifting the generated sense vector in the direction of the original word vector: $s_i~=~\lambda ~w + (1-\lambda )~\dfrac{1}{n}~\sum _{u~\in ~V_i} cos(w, u)\cdot u,$ where $\lambda \in [0, 1]$, $w$ is the embedding of the original word, $cos(w, u)$ is the cosine similarity between $w$ and $u$, and $n=|V_i|$. By introducing the linear combination of $w$ and $u~\in ~V_i$ we enforce the similarity of the sense vectors to the original word, which is important for this task. In addition to that, we weight each $u$ by its similarity to the original word, so that more similar neighbours contribute more to the sense vector. The shifting parameter $\lambda $ is set to $0.5$, following Remus:18.
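A compact numpy sketch of the two computations used in this setup — the shifted sense vector of a cluster and the max-cosine relatedness score of a word pair; names are illustrative:

    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def sense_vector(w_vec: np.ndarray, cluster_vecs: list, lam: float = 0.5) -> np.ndarray:
        # s_i = lambda * w + (1 - lambda) * (1/n) * sum over u in V_i of cos(w, u) * u
        weighted = sum(cosine(w_vec, u) * u for u in cluster_vecs) / len(cluster_vecs)
        return lam * w_vec + (1.0 - lam) * weighted

    def relatedness(senses_a: list, senses_b: list) -> float:
        # Score of a word pair = maximum cosine similarity over all sense-vector pairs.
        return max(cosine(a, b) for a in senses_a for b in senses_b)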
A fastText model is able to generate a vector for each word even if it is not represented in the vocabulary, due to the use of subword information. However, our system cannot assemble sense vectors for out-of-vocabulary words; for such words, it returns their original fastText vector. Still, the coverage of the benchmark datasets by our vocabulary is at least 85% and approaches 100% for some languages, so we do not have to resort to this back-off strategy very often.
We use the original fastText vectors as a baseline. In this case, we compute the relatedness scores of the two words as a cosine similarity of their vectors.
<<</Experimental Setup>>>
<<<Discussion of Results>>>
We compute the relatedness scores for all benchmark datasets using our sense vectors and compare them to cosine similarity scores of original fastText vectors. The results vary for different languages. Figure FIGREF28 shows the change in Pearson correlation score when switching from the baseline fastText embeddings to our sense vectors. The new vectors significantly improve the relatedness detection for German, Farsi, Russian, and Chinese, whereas for Italian, Dutch, and Swedish the score slightly falls behind the baseline. For other languages, the performance of sense vectors is on par with regular fastText.
<<</Discussion of Results>>>
<<</Lexical Similarity and Relatedness>>>
<<<Analysis>>>
In order to see how the separation of word contexts that we perform corresponds to actual senses of polysemous words, we visualise ego-graphs produced by our method. Figure FIGREF17 shows the nearest neighbours clustering for the word Ruby, which divides the graph into five senses: Ruby-related programming tools, e.g. RubyOnRails (orange cluster); female names, e.g. Josie (magenta cluster); gems, e.g. Sapphire (yellow cluster); and programming languages in general, e.g. Haskell (red cluster). Besides, as is typical for fastText embeddings, which exploit sub-string similarity, one can observe a fifth cluster with different spellings of the word Ruby in green.
Analogously, the word python (see Figure FIGREF35) is divided into the senses of animals, e.g. crocodile (yellow cluster), programming languages, e.g. perl5 (magenta cluster), and conferences, e.g. pycon (red cluster).
In addition, we show a qualitative analysis of the senses of mouse and apple. Table TABREF38 shows nearest neighbours of the original words separated into clusters (labels for clusters were assigned manually). These inventories demonstrate a clear separation of different senses, although it can be too fine-grained. For example, the first and the second cluster for mouse both refer to the computer mouse, but the first one addresses the different types of computer mice, and the second one is used in the context of mouse actions. Similarly, we see that iphone and macbook are separated into two clusters. Interestingly, fastText handles typos, code-switching, and emojis by correctly associating all non-standard variants to the words they refer to, and our method is able to cluster them appropriately. Both inventories were produced with $K=200$, which ensures stronger connectivity of the graph. However, we see that this setting still produces too many clusters. We computed the average numbers of clusters produced by our model with $K=200$ for words from the word relatedness datasets and compared these numbers with the number of senses in WordNet for English and RuWordNet BIBREF35 for Russian (see Table TABREF37). We can see that the number of senses extracted by our method is consistently higher than the real number of senses.
We also compute the average number of senses per word for all the languages and different values of $K$ (see Figure FIGREF36). The average across languages does not change much as we increase $K$. However, for larger $K$ the average exceeds the median value, indicating that more languages have a lower number of senses per word. At the same time, while at smaller $K$ the maximum average number of senses per word does not exceed 6, larger values of $K$ produce outliers, e.g. English with $12.5$ senses.
Notably, there are no languages with an average number of senses less than 2, while the numbers for the English and Russian WordNets are considerably lower. This confirms that our method systematically over-generates senses. The presence of outliers shows that this effect cannot be eliminated by further increasing $K$, because the $i$-th nearest neighbour of a word for $i>200$ can be only remotely related to this word, even if the word is rare. Thus, our sense clustering algorithm needs a method of merging spurious senses.
<<</Analysis>>>
<<</Evaluation>>>
<<<Conclusions and Future Work>>>
We present egvi, a new algorithm for word sense induction based on graph clustering that is fully unsupervised and relies on graph operations between word vectors. We apply this algorithm to a large collection of pre-trained fastText word embeddings, releasing sense inventories for 158 languages. These inventories contain all the necessary information for constructing sense vectors and using them in downstream tasks. The sense vectors for polysemous words can be directly retrofitted with the pre-trained word embeddings and do not need any external resources. As one application of these multilingual sense inventories, we present a multilingual word sense disambiguation system that performs unsupervised and knowledge-free WSD for 158 languages without the use of any dictionary or sense-labelled corpus.
The evaluation of quality of the produced sense inventories is performed on multilingual word similarity benchmarks, showing that our sense vectors improve the scores compared to non-disambiguated word embeddings. Therefore, our system in its present state can improve WSD and downstream tasks for languages where knowledge bases, taxonomies, and annotated corpora are not available and supervised WSD models cannot be trained.
A promising direction for future work is combining distributional information from the induced sense inventories with lexical knowledge bases to improve WSD performance. Besides, we encourage the use of the produced word sense inventories in other downstream tasks.
<<</Conclusions and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\n\nIntroduction\nRelated Work\nAlgorithm for Word Sense Induction\nSenseGram: A Baseline Graph-based Word Sense Induction Algorithm\negvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm\nInduction of Sense Inventories\nLabelling of Induced Senses\nWord Sense Disambiguation\nSystem Design\nConstruction of Sense Inventories\nWord Sense Disambiguation System\nEvaluation\nLexical Similarity and Relatedness\nExperimental Setup\nDiscussion of Results\nAnalysis\nConclusions and Future Work"
],
"type": "outline"
}
|
1910.04269
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Spoken Language Identification using ConvNets
<<<Abstract>>>
Language Identification (LI) is an important first step in several speech processing systems. With a growing number of voice-based assistants, speech LI has emerged as a widely researched field. To approach the problem of identifying languages, we can either adopt an implicit approach where only the speech for a language is present or an explicit one where text is available with its corresponding transcript. This paper focuses on an implicit approach due to the absence of transcriptive data. This paper benchmarks existing models and proposes a new attention based model for language identification which uses log-Mel spectrogram images as input. We also present the effectiveness of raw waveforms as features to neural network models for LI tasks. For training and evaluation of models, we classified six languages (English, French, German, Spanish, Russian and Italian) with an accuracy of 95.4% and four languages (English, French, German, Spanish) with an accuracy of 96.3% obtained from the VoxForge dataset. This approach can further be scaled to incorporate more languages.
<<</Abstract>>>
<<<Introduction>>>
Language Identification (LI) is a problem which involves classifying the language being spoken by a speaker. LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0. In speech-based assistants, LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1. It can also be used in multi-lingual voice-controlled information retrieval systems, for example, Apple Siri and Amazon Alexa.
Over the years, studies have utilized many prosodic and acoustic features to construct machine learning models for LI systems BIBREF2. Every language is composed of phonemes, which are distinct units of sound in that language, such as b of black and g of green. Several prosodic and acoustic features are based on phonemes, which become the underlying features on which the performance of the statistical model depends BIBREF3, BIBREF4. If two languages have many overlapping phonemes, then identifying them becomes a challenging task for a classifier. For example, the words cat in English, kat in Dutch and katze in German have different consonants, but when used in speech they all sound quite similar.
Due to such drawbacks several studies have switched over to using Deep Neural Networks (DNNs) to harness their novel auto-extraction techniques BIBREF1, BIBREF5. This work follows an implicit approach for identifying six languages with overlapping phonemes on the VoxForge BIBREF6 dataset and achieves 95.4% overall accuracy.
In previous studies BIBREF1, BIBREF7, BIBREF5, the authors use the log-Mel spectrum of raw audio as input to their models. One of our contributions is to enhance the performance of this approach by utilising recent techniques like Mixup augmentation of inputs and by exploring the effectiveness of the attention mechanism in enhancing the performance of the neural network. As the log-Mel spectrum needs to be computed for each raw audio input and the processing time for generating it increases linearly with the length of the audio, this acts as a bottleneck for these models. Hence, we propose the use of raw audio waveforms as inputs to a deep neural network, which boosts performance by avoiding the additional overhead of computing the log-Mel spectrum for each audio clip. Our 1D-ConvNet architecture auto-extracts and classifies features from this raw audio input.
The structure of the work is as follows. In Section 2 we discuss the previous related studies in this field. The model architecture for both the raw waveforms and the log-Mel spectrogram images is discussed in Section 3, along with a discussion of hyperparameter space exploration. In Section 4 we present the experimental results. Finally, in Section 5 we discuss the conclusions drawn from the experiment and future work.
<<</Introduction>>>
<<<Related Work>>>
Extraction of language-dependent features like prosody and phonemes was a popular approach to classify spoken languages BIBREF8, BIBREF9, BIBREF10. Following their success in speaker verification systems, i-vectors have also been used as features in various classification networks. These approaches required significant domain knowledge BIBREF11, BIBREF9. Nowadays, most attempts at spoken language identification rely on neural networks for meaningful feature extraction and classification BIBREF12, BIBREF13.
Revay et al. BIBREF5 used the ResNet50 BIBREF14 architecture for classifying languages by generating the log-Mel spectra of each raw audio. The model uses a cyclic learning rate where the learning rate increases and then decreases linearly. The maximum learning rate for a cycle is set by finding the optimal learning rate using the fastai BIBREF15 library. The model classified six languages – English, French, Spanish, Russian, Italian and German – achieving an accuracy of 89.0%.
Gazeau et al. BIBREF16 showed how Neural Networks, Support Vector Machines and Hidden Markov Models (HMM) can be used to identify French, English, Spanish and German. The dataset was prepared using voice samples from the Youtube News BIBREF17 and VoxForge BIBREF6 datasets. Hidden Markov models, which convert speech into a sequence of vectors, were used to capture temporal features in speech. HMMs trained on the VoxForge BIBREF6 dataset performed best in comparison to the other models they proposed on the same VoxForge dataset. They reported an accuracy of 70.0%.
Bartz et al. BIBREF1 proposed two different hybrid Convolutional Recurrent Neural Networks for language identification. They proposed a new architecture for extracting spatial features from log-Mel spectra of raw audio using CNNs and then using RNNs for capturing temporal features to identify the language. This model achieved an accuracy of 91.0% on the Youtube News Dataset BIBREF17. In their second architecture they used the Inception-v3 BIBREF18 architecture to extract spatial features which were then used as input for bi-directional LSTMs to predict the language accurately. This model achieved an accuracy of 96.0% on four languages, which were English, German, French and Spanish. They also trained their CNN model (obtained after removing the RNN from the CRNN model) and the Inception-v3 on their dataset. However, they were not able to achieve better results with these models, reporting 90% and 95% accuracies, respectively.
Kumar et al. BIBREF0 used Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction coefficients (PLP), Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) as features for language identification. BFCC and RPLP are hybrid features derived using MFCC and PLP. They used two different models based on Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM) for classification. These classification models were trained with different features. The authors were able to show that these models worked better with hybrid features (BFCC and RPLP) as compared to conventional features (MFCC and PLP). GMM combined with RPLP features gave the most promising results and achieved an accuracy of 88.8% on ten languages. They designed their own dataset comprising ten languages: Dutch, English, French, German, Italian, Russian, Spanish, Hindi, Telegu, and Bengali.
Montavon BIBREF7 generated Mel spectrograms as features for a time-delay neural network (TDNN). This network had two-dimensional convolutional layers for feature extraction. An elaborate analysis of how deep architectures outperform their shallow counterparts is presented in this research. The difficulties in classifying perceptually similar languages like German and English were also put forward in this work. It is mentioned that the proposed approach is less robust to new speakers present in the test dataset. This method was able to achieve an accuracy of 91.2% on a dataset comprising 3 languages – English, French and German.
In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel).
<<</Related Work>>>
<<<Proposed Method>>>
<<<Motivations>>>
Several state-of-the-art results on various audio classification tasks have been obtained by using log-Mel spectrograms of raw audio, as features BIBREF19. Convolutional Neural Networks have demonstrated an excellent performance gain in classification of these features BIBREF20, BIBREF21 against other machine learning techniques. It has been shown that using attention layers with ConvNets further enhanced their performance BIBREF22. This motivated us to develop a CNN-based architecture with attention since this approach hasn’t been applied to the task of language identification before.
Recently, using raw audio waveform as features to neural networks has become a popular approach in audio classification BIBREF23, BIBREF22. Raw waveforms have several artifacts which are not effectively captured by various conventional feature extraction techniques like Mel Frequency Cepstral Coefficients (MFCC), Constant Q Transform (CQT), Fast Fourier Transform (FFT), etc.
Audio files are a sequence of spoken words, hence they have temporal features too. A CNN is better at capturing spatial features only, and RNNs are better at capturing temporal features, as demonstrated by Bartz et al. BIBREF1 using audio files. Therefore, we combined both of these to make a CRNN model.
We propose three types of models to tackle the problem with different approaches, discussed as follows.
<<</Motivations>>>
<<<Description of Features>>>
As an average human's voice is around 300 Hz and, according to the Nyquist-Shannon sampling theorem, all the useful frequencies (0-300 Hz) are preserved when sampling at 8 kHz, we sampled the raw audio files from all six languages at 8 kHz.
The average length of the audio files in this dataset was about 10.4 seconds and the standard deviation was 2.3 seconds. For our experiments, the audio length was set to 10 seconds. If an audio file was shorter than 10 seconds, then the data was repeated and concatenated. If an audio file was longer, then the data was truncated.
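A minimal preprocessing sketch consistent with this description (8 kHz sampling, fixed 10-second inputs); the librosa-based loading and the names are illustrative, not the authors' exact code:

    import numpy as np
    import librosa

    SAMPLE_RATE = 8000
    TARGET_LEN = 10 * SAMPLE_RATE  # 10 seconds -> 80,000 samples

    def load_fixed_length(path: str) -> np.ndarray:
        audio, _ = librosa.load(path, sr=SAMPLE_RATE, mono=True)
        if len(audio) < TARGET_LEN:
            # Repeat and concatenate short clips until they reach 10 seconds.
            audio = np.tile(audio, int(np.ceil(TARGET_LEN / len(audio))))
        # Truncate long (or over-repeated) clips to exactly 10 seconds.
        return audio[:TARGET_LEN]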
<<</Description of Features>>>
<<<Model Description>>>
We applied the following design principles to all our models:
Every convolutional layer is always followed by an appropriate max pooling layer. This helps in containing the explosion of parameters and keeps the model small and nimble.
A convolutional block is defined as an individual block with multiple pairs of one convolutional layer and one max pooling layer. Each convolutional block is preceded or succeeded by a convolutional layer.
Batch Normalization and Rectified linear unit activations were applied after each convolutional layer. Batch Normalization helps speed up convergence during training of a neural network.
The model ends with a dense layer which acts as the final output layer.
<<</Model Description>>>
<<<Model Details: 1D ConvNet>>>
As the sampling rate is 8 kHz and the audio length is 10 s, the input to the models is raw audio with an input size of (batch size, 1, 80000). In Table TABREF10, we present a detailed layer-by-layer illustration of the model along with its hyperparameters.
<<<Hyperparameter Optimization:>>>
Tuning hyperparameters is a cumbersome process, as the hyperparameter space expands exponentially with the number of parameters; therefore, efficient exploration is needed for any feasible study. We used the random search algorithm supported by the Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF12, the various hyperparameters we considered are plotted against the validation accuracy as violin plots. Our observations for each hyperparameter are summarized below, followed by a short sketch of the resulting configuration:
Number of filters in first layer: We observe that having 128 filters gives better results as compared to the other filter values of 32 and 64 in the first layer. A higher number of filters in the first layer of the network is able to preserve most of the characteristics of the input.
Kernel Size: We varied the receptive fields of the convolutional layers by choosing the kernel size from among the set of {3, 5, 7, 9}. We observe that a kernel size of 9 gives better accuracy at the cost of increased computation time and a larger number of parameters. A large kernel size is able to capture longer patterns in its input due to its bigger receptive field, which results in improved accuracy.
Dropout: Dropout randomly turns off (sets to 0) individual nodes during training of the network. In a deep CNN it is important that nodes do not develop a co-dependency amongst each other during training in order to prevent overfitting on the training data BIBREF25. A dropout rate of $0.1$ works well for our model. When using a higher dropout rate the network is not able to capture the patterns in the training dataset.
Batch Size: We chose batch sizes from amongst the set {32, 64, 128}. There is more noise when calculating the error with a smaller batch size as compared to a larger one. This tends to have a regularizing effect during training of the network and hence gives better results. Thus, a batch size of 32 works best for the model.
Layers in Convolutional block 1 and 2: We varied the number of layers in both the convolutional blocks. If the number of layers is low, then the network does not have enough depth to capture patterns in the data, whereas having a large number of layers leads to overfitting on the data. In our network, two layers in the first block and one layer in the second block give optimal results.
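The exact layer-by-layer configuration is given in Table TABREF10 and is not reproduced here; the following PyTorch sketch only illustrates the stated design principles and the best-found hyperparameters (128 first-layer filters, kernel size 9, dropout of 0.1, convolution + batch normalization + ReLU + max pooling blocks, and a dense output layer), so the layer sizes are assumptions rather than the authors' exact architecture.

    import torch
    import torch.nn as nn

    def conv_block(c_in: int, c_out: int, kernel: int = 9, pool: int = 4) -> nn.Sequential:
        # One convolutional layer followed by batch norm, ReLU and max pooling.
        return nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel_size=kernel, padding=kernel // 2),
            nn.BatchNorm1d(c_out),
            nn.ReLU(),
            nn.MaxPool1d(pool),
        )

    class Conv1DNet(nn.Module):
        def __init__(self, n_classes: int = 6):
            super().__init__()
            self.features = nn.Sequential(
                conv_block(1, 128),    # 128 filters in the first layer
                conv_block(128, 128),  # convolutional block 1 (illustrative depth)
                conv_block(128, 64),   # convolutional block 2 (illustrative depth)
                nn.Dropout(0.1),
            )
            self.classifier = nn.LazyLinear(n_classes)  # dense output layer

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: raw audio of shape (batch_size, 1, 80000)
            return self.classifier(self.features(x).flatten(1))

    # Example: logits = Conv1DNet()(torch.randn(32, 1, 80000))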
<<</Hyperparameter Optimization:>>>
<<</Model Details: 1D ConvNet>>>
<<<Model Details: 2D ConvNet with Attention and bi-directional GRU>>>
The log-Mel spectrogram is the most commonly used method for converting audio into the image domain. The audio data was again sampled at 8 kHz. The input to this model was the log-Mel spectra. We generated the log-Mel spectrograms using the LibROSA BIBREF26 library. In Table TABREF16, we present a detailed layer-by-layer illustration of the model along with its hyperparameters.
<<<>>>
We took some specific design choices for this model, which are as follows:
We added residual connections to each convolutional layer. Residual connections in a way make the model selective of the contributing layers, determine the optimal number of layers required for training and solve the problem of vanishing gradients. Residual connections, or skip connections, skip the training of those layers that do not contribute much to the overall outcome of the model.
We added spatial attention BIBREF27 networks to help the model in focusing on specific regions or areas in an image. Spatial attention aids learning irrespective of transformations, scaling and rotation done on the input images making the model more robust and helping it to achieve better results.
We added Channel Attention networks to help the model find interdependencies among the color channels of log-Mel spectra. It adaptively assigns importance to each color channel in a deep convolutional multi-channel network. In our model we apply channel and spatial attention just before feeding the input into the bi-directional GRU. This helps the model to focus on selected regions and at the same time find patterns among channels to better determine the language (an illustrative sketch of both attention modules follows).
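The following PyTorch sketch shows channel and spatial attention modules of the kind described above; the exact modules used in the model are specified in Table TABREF16, so this is only a hedged approximation with assumed layer sizes.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Re-weight each channel by a gate computed from its global average.
            w = self.gate(x.mean(dim=(2, 3)))
            return x * w.unsqueeze(-1).unsqueeze(-1)

    class SpatialAttention(nn.Module):
        def __init__(self, kernel: int = 7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel, padding=kernel // 2)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Gate each spatial position using channel-wise mean and max maps.
            maps = torch.cat([x.mean(dim=1, keepdim=True),
                              x.max(dim=1, keepdim=True).values], dim=1)
            return x * torch.sigmoid(self.conv(maps))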
<<</>>>
<<</Model Details: 2D ConvNet with Attention and bi-directional GRU>>>
<<<Model details: 2D-ConvNet>>>
This model is a similar model to 2D-ConvNet with Attention and bi-directional GRU described in section SECREF13 except that it lacks skip connections, attention layers, bi-directional GRU and the embedding layer incorporated in the previous model.
<<</Model details: 2D-ConvNet>>>
<<<Dataset>>>
We classified six languages (English, French, German, Spanish, Russian and Italian) from the VoxForge BIBREF6 dataset. VoxForge is an open-source speech corpus which primarily consists of samples recorded and submitted by users using their own microphone. This results in significant variation of speech quality between samples making it more representative of real world scenarios.
Our dataset consists of 1,500 samples for each of the six languages. Out of the 1,500 samples for each language, 1,200 were randomly selected as the training dataset for that language and the remaining 300 as the validation dataset using k-fold cross-validation. To sum up, we trained our model on 7,200 samples and validated it on 1,800 samples comprising six languages. The results are discussed in the next section.
<<</Dataset>>>
<<</Proposed Method>>>
<<<Results and Discussion>>>
This paper discusses two end-to-end approaches which achieve state-of-the-art results in both the image and the audio domain on the VoxForge dataset BIBREF6. In Table TABREF25, we present the classification accuracies of the two models, with and without mixup, for six and four languages.
In the audio domain (using raw audio waveform as input), 1D-ConvNet achieved a mean accuracy of 93.7% with a standard deviation of 0.3% on running k-fold cross validation. In Fig FIGREF27 (a) we present the confusion matrix for the 1D-ConvNet model.
In the image domain (obtained by taking log-Mel spectra of raw audio), 2D-ConvNet with 2D attention (channel and spatial attention) and bi-directional GRU achieved a mean accuracy of 95.0% with a standard deviation of 1.2% for six languages. This model performed better when mixup regularization was applied. 2D-ConvNet achieved a mean accuracy of 95.4% with a standard deviation of 0.6% on running k-fold cross validation for six languages when mixup was applied. In Fig FIGREF27 (b) we present the confusion matrix for the 2D-ConvNet model. The 2D attention modules focused on the important features extracted by the convolutional layers, and the bi-directional GRU captured the temporal features.
<<<Misclassification>>>
Several of the spoken languages in Europe belong to the Indo-European family. Within this family, the languages are divided into three phyla, which are Romance, Germanic and Slavic. Of the 6 languages that we selected, Spanish (Es), French (Fr) and Italian (It) belong to the Romance phylum, English and German belong to the Germanic phylum, and Russian to the Slavic phylum. Our model also confuses languages belonging to the same phylum, which acts as a sanity check, since languages in the same phylum have many similarly pronounced words, such as cat in English which becomes Katze in German, and Ciao in Italian which becomes Chao in Spanish.
Our model also confuses French (Fr) and Russian (Ru). While these languages belong to different phyla, many words from French were adopted into Russian, such as automate (oot-oo-mate) in French, which becomes ABTOMaT (aff-taa-maat) in Russian with a similar pronunciation.
<<</Misclassification>>>
<<<Future Scope>>>
The performance of raw audio waveforms as input features to ConvNet can be further improved by applying silence removal in the audio. Also, there is scope for improvement by augmenting available data through various conventional techniques like pitch shifting, adding random noise and changing speed of audio. These help in making neural networks more robust to variations which might be present in real world scenarios. There can be further exploration of various feature extraction techniques like Constant-Q transform and Fast Fourier Transform and assessment of their impact on Language Identification.
There can be further improvements in neural network architectures like concatenating the high level features obtained from 1D-ConvNet and 2D-ConvNet, before performing classification. There can be experiments using deeper networks with skip connections and Inception modules. These are known to have positively impacted the performance of Convolutional Neural Networks.
<<</Future Scope>>>
<<</Results and Discussion>>>
<<<Conclusion>>>
There are two main contributions of this paper in the domain of spoken language identification. Firstly, we presented an extensive analysis of raw audio waveforms as input features to a 1D-ConvNet. We experimented with various hyperparameters in our 1D-ConvNet and evaluated their effect on validation accuracy. This method is able to bypass the computational overhead of conventional approaches, which depend on the generation of spectrograms as a necessary pre-processing step. We were able to achieve an accuracy of 93.7% using this technique.
Next, we discussed the enhancement in performance of the 2D-ConvNet using mixup augmentation, which is a recently developed technique to prevent overfitting on test data. This approach achieved an accuracy of 95.4%. We also analysed how the attention mechanism and recurrent layers impact the performance of the networks. This approach achieved an accuracy of 95.0%.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nProposed Method\nMotivations\nDescription of Features\nModel Description\nModel Details: 1D ConvNet\nHyperparameter Optimization:\nModel Details: 2D ConvNet with Attention and bi-directional GRU\n\nModel details: 2D-ConvNet\nDataset\nResults and Discussion\nMisclassification\nFuture Scope\nConclusion"
],
"type": "outline"
}
|
2001.00137
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Stacked DeBERT: All Attention in Incomplete Data for Text Classification
<<<Abstract>>>
In this paper, we propose Stacked DeBERT, short for Stacked Denoising Bidirectional Encoder Representations from Transformers. This novel model improves robustness in incomplete data, when compared to existing systems, by designing a novel encoding scheme in BERT, a powerful language representation model solely based on attention mechanisms. Incomplete data in natural language processing refer to text with missing or incorrect words, and its presence can hinder the performance of current models that were not implemented to withstand such noises, but must still perform well even under duress. This is due to the fact that current approaches are built for and trained with clean and complete data, and thus are not able to extract features that can adequately represent incomplete data. Our proposed approach consists of obtaining intermediate input representations by applying an embedding layer to the input tokens followed by vanilla transformers. These intermediate features are given as input to novel denoising transformers which are responsible for obtaining richer input representations. The proposed approach takes advantage of stacks of multilayer perceptrons for the reconstruction of missing words' embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. We consider two datasets for training and evaluation: the Chatbot Natural Language Understanding Evaluation Corpus and Kaggle's Twitter Sentiment Corpus. Our model shows improved F1-scores and better robustness in informal/incorrect texts present in tweets and in texts with Speech-to-Text error in the sentiment and intent classification tasks.
<<</Abstract>>>
<<<Introduction>>>
Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of the internet and social networks, which have allowed language and modern communication to be rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard for correct sentence grammar or word spelling BIBREF3.
Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.
The advancement of deep neural networks has immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification have been possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need for research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regard for sentence correctness.
Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.
The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utmost importance for high-performing methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:
Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.
Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.
The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works.
<<</Introduction>>>
<<<Proposed model>>>
We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheme for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.
The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and suffixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embeddings, segmentation embeddings and position embeddings. The first one, the token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which output a hidden embedding that can be used by our novel layers of denoising transformers.
Although BERT has shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words’ embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.
The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing the $h_{inc}$ into a latent-space representation, extracting more abstract features into lower dimension vectors $z_1$, $z_2$ and $\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):
where $f(\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):
where $g(\cdot )$ is the parameterized function that reconstructs $\mathbf {z}$ as $h_{rec}$.
The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):
After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.
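To make the reconstruction stage concrete, the following is a minimal PyTorch sketch of the MLP encoder/decoder stack trained with an MSE objective between reconstructed and complete-sentence embeddings. The layer sizes follow the shapes quoted above; the ReLU activations and the (batch, seq_len, hidden) tensor layout are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class DenoisingMLPStack(nn.Module):
    """Sketch only: f(.) compresses 768 -> 128 -> 32 -> 12, g(.) reconstructs 12 -> 32 -> 128 -> 768."""

    def __init__(self, hidden: int = 768):
        super().__init__()
        self.encoder = nn.Sequential(          # f(.): h_inc -> z1 -> z2 -> z
            nn.Linear(hidden, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 12),
        )
        self.decoder = nn.Sequential(          # g(.): z -> h_rec1 -> h_rec2 -> h_rec
            nn.Linear(12, 32), nn.ReLU(),
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, hidden),
        )

    def forward(self, h_inc: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(h_inc))

model = DenoisingMLPStack()
criterion = nn.MSELoss()
h_inc = torch.randn(8, 128, 768)    # embeddings of incomplete sentences (batch, seq_len, hidden)
h_comp = torch.randn(8, 128, 768)   # embeddings of their complete counterparts (the target)
loss = criterion(model(h_inc), h_comp)
loss.backward()
```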
Classification is done with a feedforward network and a softmax activation function. Softmax $\sigma $ is a discrete probability distribution function over $N_C$ classes, with the sum of the class probabilities being 1 and the maximum value indicating the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):
where $o = W t + b$, the output of the feedforward layer used for classification.
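A hypothetical sketch of this classification head (the 768-dimensional input, the batch size, and the class count are placeholders, not values taken from the paper):

```python
import torch
import torch.nn as nn

n_classes = 2                             # N_C, e.g. the two sentiment classes
classifier = nn.Linear(768, n_classes)    # o = W t + b

t = torch.randn(8, 768)                   # sentence-level representation t from the transformers
o = classifier(t)
probs = torch.softmax(o, dim=-1)          # sigma(o): probabilities summing to 1 over the classes
pred = probs.argmax(dim=-1)               # class with the maximum probability
```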
<<</Proposed model>>>
<<<Dataset>>>
<<<Twitter Sentiment Classification>>>
In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.
Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.
After obtaining the correct sentences, our two-class dataset has class distribution as shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples being used in the evaluation stage, with 25 negative and 25 positive. This totals 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning.
<<</Twitter Sentiment Classification>>>
<<<Intent Classification from Text with STT Error>>>
In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Because the available TTS and STT modules are imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.
The dataset used to evaluate the models' performance is the Chatbot Natural Language Understanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.
The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts, a Google Text-to-Speech Python library, and macsay, the say terminal command available in macOS. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai, freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen based on code availability, whether they are free to use, and whether they impose high daily usage limits.
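A rough sketch of this two-step noise-injection pipeline is given below. The gtts and macOS say calls follow those tools' standard usage; the Wit.ai STT call is left as a placeholder because its exact request format is not reproduced in the text, and the example sentence is hypothetical.

```python
import subprocess
from gtts import gTTS

def tts_gtts(text: str, path: str = "utt.mp3") -> str:
    gTTS(text=text, lang="en").save(path)                  # Google Text-to-Speech library
    return path

def tts_macsay(text: str, path: str = "utt.aiff") -> str:
    subprocess.run(["say", "-o", path, text], check=True)  # macOS-only `say` command
    return path

def stt_witai(audio_path: str) -> str:
    """Placeholder for the Wit.ai speech-to-text step; requires a Wit.ai token and the
    service's audio endpoint, neither of which is reproduced here."""
    raise NotImplementedError("plug in the STT service of choice here")

if __name__ == "__main__":
    complete = "when is the next train to marienplatz"     # hypothetical example sentence
    audio = tts_gtts(complete)
    # noisy = stt_witai(audio)  # would yield a sentence with missing/incorrect words
```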
Table TABREF24 exemplifies a complete sentence and its respective incomplete sentences for the different TTS-STT combinations, with varying rates of missing and incorrect words. The level of noise in the STT-imbued sentences is denoted by an inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is denoted in Eq. (DISPLAY_FORM23):
where BLEU is a common metric usually used in machine translation tasks BIBREF21. We showcase iBLEU instead of regular BLEU because it is more indicative of the amount of noise in the incomplete text: the higher the iBLEU, the higher the noise.
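Since the text describes iBLEU as an inverted BLEU in [0, 1] where higher values mean more noise, the natural reading is iBLEU = 1 - BLEU; the sketch below assumes that reading (the exact equation is not reproduced in this excerpt) and uses NLTK's sentence-level BLEU.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def ibleu(complete: str, incomplete: str) -> float:
    reference = [complete.lower().split()]
    hypothesis = incomplete.lower().split()
    bleu = sentence_bleu(reference, hypothesis,
                         smoothing_function=SmoothingFunction().method1)
    return 1.0 - bleu     # higher iBLEU => noisier STT output

# hypothetical example pair, not taken from the paper's tables
print(ibleu("i want to book a flight to seattle",
            "i want to book a fly to seattle"))
```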
<<</Intent Classification from Text with STT Error>>>
<<</Dataset>>>
<<<Experiments>>>
<<<Baseline models>>>
Besides the already mentioned BERT, the following baseline models are also used for comparison.
<<<NLU service platforms>>>
We focus on the following three services, where the first two are commercial services and the last one is open source with two separate backends: Google Dialogflow (formerly Api.ai), SAP Conversational AI (formerly Recast.ai), and Rasa (spacy and tensorflow backend).
<<</NLU service platforms>>>
<<<Semantic hashing with classifier>>>
Shridhar et al. BIBREF12 proposed a word embedding method that doesn't suffer from out-of-vocabulary issues. The authors achieve this by using hash tokens in the alphabet instead of a single word, making it vocabulary independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications are given in Section SECREF31.
<<</Semantic hashing with classifier>>>
<<</Baseline models>>>
<<<Training specifications>>>
The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: once on complete data and once for each of the TTS-STT combinations (gtts-witai and macsay-witai). For the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: on the original text, on the corrected text, and on the incorrect and correct texts combined. The reported F1 scores are the best scores obtained from 10 runs.
<<<BERT>>>
Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks $L$, hidden size $H$ of 768, and 12 self-attention heads $A$. The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer, learning rate of $2*10^{-5}$, maximum sequence length of 128, and warm up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus.
<<</BERT>>>
<<<Stacked DeBERT>>>
Our proposed model is trained in an end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and the train batch size. The stack of multilayer perceptrons is trained for 100 and 1,000 epochs with the Adam optimizer, a learning rate of $10^{-3}$, a weight decay of $10^{-5}$, an MSE loss criterion, and the same batch size as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus).
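A small sketch of this training specification, with a stand-in module for the multilayer-perceptron stack (the real architecture is described in the Proposed model section; everything here simply restates the hyperparameters above):

```python
import torch
import torch.nn as nn

mlp_stack = nn.Sequential(nn.Linear(768, 12), nn.Linear(12, 768))   # stand-in for the denoising MLP stack
optimizer = torch.optim.Adam(mlp_stack.parameters(), lr=1e-3, weight_decay=1e-5)
criterion = nn.MSELoss()
batch_size = {"twitter_sentiment": 4, "chatbot_intent": 8}           # same batch sizes as BERT fine-tuning
epochs_settings = (100, 1000)                                        # the two epoch settings reported above
```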
<<</Stacked DeBERT>>>
<<</Training specifications>>>
<<<Results on Sentiment Classification from Incorrect Text>>>
Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micro scores, outperforming the baseline models by 6$\%$ to 8$\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves an accuracy of 80$\%$ against BERT's 72$\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\%$ accuracy against BERT's 76$\%$, an improvement of 6$\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\%$ for our model and 74$\%$ for the second-highest-performing model. Since the first and last corpora gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.
In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the true/target labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower-performing classes. In the Inc dataset, the true class 1 in BERT performs with approximately 50%. However, Stacked DeBERT is able to improve that to 72%, although at the cost of a small decrease in the performance of class 0. A similar situation happens in the remaining two datasets, with improved accuracy in class 0 from 64% to 84% and from 60% to 76%, respectively.
<<</Results on Sentiment Classification from Incorrect Text>>>
<<<Results on Intent Classification from Text with STT Error>>>
Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both TTS-STT combinations: on gtts-witai it outperforms the second-placing baseline model by 0.94% with an F1-score of 97.17%, and on macsay-witai it outperforms the next-highest-achieving model by 1.89% with an F1-score of 96.23%.
The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean a higher quantity of noise. As expected, the models' accuracy degrades with the increase in noise, thus the F1-scores for gtts-witai are higher than those for macsay-witai. However, while the other models decay rapidly in the presence of noise, our model not only outperforms them but does so by a wider margin. This is shown with the increasing robustness curve in Fig. FIGREF41 and can be demonstrated by our model outperforming the baseline models on macsay-witai by twice the gap achieved on gtts-witai.
Further analysis of the results in Table TABREF40 shows that BERT's decay is almost constant with the addition of noise, with the difference between the complete data and gtts-witai being 1.88 and between gtts-witai and macsay-witai being 1.89. In Stacked DeBERT, those differences are 1.89 and 0.94, respectively. This is a stronger indication of our model's robustness in the presence of noise.
Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one.
<<</Results on Intent Classification from Text with STT Error>>>
<<</Experiments>>>
<<<Conclusion>>>
In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve accuracy by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with a neural classifier. Our model showed better performance when evaluated on F1 scores in both the Twitter sentiment classification task and the intent classification task on text with STT error. The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading off small decreases in the high-achieving class for significant improvements in lower-performing ones. In the Chatbot dataset, accuracy improvement was achieved even without trade-off, with the highest-achieving classes maintaining their accuracy while the lower-achieving class saw improvement. Further evaluation of the F1-score decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Not only that, experiments on the Twitter dataset also showed improved accuracy on clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future work, we plan to evaluate the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search of more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nProposed model\nDataset\nTwitter Sentiment Classification\nIntent Classification from Text with STT Error\nExperiments\nBaseline models\nNLU service platforms\nSemantic hashing with classifier\nTraining specifications\nBERT\nStacked DeBERT\nResults on Sentiment Classification from Incorrect Text\nResults on Intent Classification from Text with STT Error\nConclusion"
],
"type": "outline"
}
|
2003.08529
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Diversity, Density, and Homogeneity: Quantitative Characteristic Metrics for Text Collections
<<<Abstract>>>
Summarizing data samples by quantitative measures has a long history, with descriptive statistics being a case in point. However, as natural language processing methods flourish, there are still insufficient characteristic metrics to describe a collection of texts in terms of the words, sentences, or paragraphs they comprise. In this work, we propose metrics of diversity, density, and homogeneity that quantitatively measure the dispersion, sparsity, and uniformity of a text collection. We conduct a series of simulations to verify that each metric holds desired properties and resonates with human intuitions. Experiments on real-world datasets demonstrate that the proposed characteristic metrics are highly correlated with text classification performance of a renowned model, BERT, which could inspire future applications.
<<</Abstract>>>
<<<Introduction>>>
Characteristic metrics are a set of unsupervised measures that quantitatively describe or summarize the properties of a data collection. These metrics generally do not use ground-truth labels and only measure the intrinsic characteristics of data. The most prominent example is descriptive statistics that summarizes a data collection by a group of unsupervised measures such as mean or median for central tendency, variance or minimum-maximum for dispersion, skewness for symmetry, and kurtosis for heavy-tailed analysis.
In recent years, text classification, a category of Natural Language Processing (NLP) tasks, has drawn much attention BIBREF0, BIBREF1, BIBREF2 for its wide-ranging real-world applications such as fake news detection BIBREF3, document classification BIBREF4, and spoken language understanding (SLU) BIBREF5, BIBREF6, BIBREF7, a core task of conversational assistants like Amazon Alexa or Google Assistant.
However, there are still insufficient characteristic metrics to describe a collection of texts. Unlike for numeric or categorical data, simple descriptive statistics alone, such as word counts and vocabulary size, can hardly capture the syntactic and semantic properties of a text collection.
In this work, we propose a set of characteristic metrics: diversity, density, and homogeneity to quantitatively summarize a collection of texts where the unit of texts could be a phrase, sentence, or paragraph. A text collection is first mapped into a high-dimensional embedding space. Our characteristic metrics are then computed to measure the dispersion, sparsity, and uniformity of the distribution. Based on the choice of embedding methods, these characteristic metrics can help understand the properties of a text collection from different linguistic perspectives, for example, lexical diversity, syntactic variation, and semantic homogeneity. Our proposed diversity, density, and homogeneity metrics extract hard-to-visualize quantitative insight for a better understanding and comparison between text collections.
To verify the effectiveness of proposed characteristic metrics, we first conduct a series of simulation experiments that cover various scenarios in two-dimensional as well as high-dimensional vector spaces. The results show that our proposed quantitative characteristic metrics exhibit several desirable and intuitive properties such as robustness and linear sensitivity of the diversity metric with respect to random down-sampling. Besides, we investigate the relationship between the characteristic metrics and the performance of a renowned model, BERT BIBREF8, on the text classification task using two public benchmark datasets. Our results demonstrate that there are high correlations between text classification model performance and the characteristic metrics, which shows the efficacy of our proposed metrics.
<<</Introduction>>>
<<<Related Work>>>
A building block of characteristic metrics for text collections is the language representation method. A classic way to represent a sentence or a paragraph is the n-gram, with a dimension equal to the size of the vocabulary. More advanced methods learn a relatively low-dimensional latent space that represents each word or token as a continuous semantic vector, such as word2vec BIBREF9, GloVe BIBREF10, and fastText BIBREF11. These methods have been widely adopted with consistent performance improvements on many NLP tasks. Also, there has been extensive research on representing a whole sentence as a vector, such as a plain or weighted average of word vectors BIBREF12, skip-thought vectors BIBREF13, and self-attentive sentence encoders BIBREF14.
More recently, there is a paradigm shift from non-contextualized word embeddings to self-supervised language model (LM) pretraining. Language encoders are pretrained on a large text corpus using a LM-based objective and then re-used for other NLP tasks in a transfer learning manner. These methods can produce contextualized word representations, which have proven to be effective for significantly improving many NLP tasks. Among the most popular approaches are ULMFiT BIBREF2, ELMo BIBREF15, OpenAI GPT BIBREF16, and BERT BIBREF8. In this work, we adopt BERT, a transformer-based technique for NLP pretraining, as the backbone to embed a sentence or a paragraph into a representation vector.
Another stream of related works is the evaluation metrics for cluster analysis. As measuring property or quality of outputs from a clustering algorithm is difficult, human judgment with cluster visualization tools BIBREF17, BIBREF18 are often used. There are unsupervised metrics to measure the quality of a clustering result such as the Calinski-Harabasz score BIBREF19, the Davies-Bouldin index BIBREF20, and the Silhouette coefficients BIBREF21. Complementary to these works that model cross-cluster similarities or relationships, our proposed diversity, density and homogeneity metrics focus on the characteristics of each single cluster, i.e., intra cluster rather than inter cluster relationships.
<<</Related Work>>>
<<<Proposed Characteristic Metrics>>>
We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions.
Our first assumption is that, for classification, high-quality training data entail that examples of one class are as differentiable and distinct as possible from another class. From a fine-grained, intra-class perspective, a robust text cluster should be diverse in syntax, which is captured by diversity. In addition, each example should reflect a sufficient signature of the class to which it belongs; that is, each example is representative and contains certain salient features of the class. We define a density metric to account for this aspect. On top of that, examples should also be semantically similar and coherent with each other within a cluster, which is where homogeneity comes into play.
The more subtle intuition emerges from the inter-class viewpoint. When there are two or more class labels in a text collection, in an ideal scenario, we would expect the homogeneity to be monotonically decreasing. Potentially, the diversity increases with respect to the number of classes, since text clusters should be as distinct and separate as possible from one another. If there is significant ambiguity between classes, the behavior of the proposed metrics, as well as a possible new metric for measuring inter-class confusability, remains for future work.
In practice, the input is a collection of texts $\lbrace x_1, x_2, ..., x_m\rbrace $, where $x_i$ is a sequence of tokens $x_{i1}, x_{i2}, ..., x_{il}$ denoting a phrase, a sentence, or a paragraph. An embedding method $\mathcal {E}$ then transforms $x_i$ into a vector $\mathcal {E}(x_i)=e_i$ and the characteristic metrics are computed with the embedding vectors. For example, $\mathcal {E}$ can be a pretrained BERT encoder that maps each $x_i$ to a 768-dimensional vector $e_i$.
Note that these embedding vectors often lie in a high-dimensional space, e.g. commonly over 300 dimensions. This motivates our design of characteristic metrics to be sensitive to text collections of different properties while being robust to the curse of dimensionality.
We then assume a set of clusters created over the generated embedding vectors. In classification tasks, the embeddings pertaining to members of a class form a cluster, i.e., in a supervised setting. In an unsupervised setting, we may apply a clustering algorithm to the embeddings. It is worth noting that, in general, the metrics are independent of the assumed underlying grouping method.
<<<Diversity>>>
Embedding vectors of a given group of texts $\lbrace e_1, ..., e_m\rbrace $ can be treated as a cluster in the high-dimensional embedding space. We propose a diversity metric to estimate the cluster's dispersion or spreadness via a generalized sense of the radius.
Specifically, if a cluster is distributed as a multi-variate Gaussian with a diagonal covariance matrix $\Sigma $, the shape of an isocontour will be an axis-aligned ellipsoid in $\mathbb {R}^{H}$. Such isocontours can be described as:
where $x$ are all possible points in $\mathbb {R}^{H}$ on an isocontour, $c$ is a constant, $\mu $ is a given mean vector with $\mu _j$ being the value along $j$-th axis, and $\sigma ^2_j$ is the variance of the $j$-th axis.
We leverage the geometric interpretation of this formulation and treat the square root of variance, i.e., standard deviation, $\sqrt{\sigma ^2_j}$ as the radius $r_j$ of the ellipsoid along the $j$-th axis. The diversity metric is then defined as the geometric mean of radii across all axes:
where $\sigma _i$ is the standard deviation or square root of the variance along the $i$-th axis.
In practice, to compute a diversity metric, we first calculate the standard deviation of embedding vectors along each dimension and take the geometric mean of all calculated values. Note that as the geometric mean acts as a dimensionality normalization, it makes the diversity metric work well in high-dimensional embedding spaces such as BERT.
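A minimal NumPy sketch of this computation (the small epsilon is only for numerical stability and is not part of the definition):

```python
import numpy as np

def diversity(embeddings: np.ndarray) -> float:
    """embeddings: (m, H) matrix of H-dimensional vectors for one text collection."""
    radii = embeddings.std(axis=0)                          # per-axis standard deviation
    return float(np.exp(np.mean(np.log(radii + 1e-12))))    # geometric mean across the H axes

embeddings = np.random.randn(1000, 768)
print(diversity(embeddings))   # close to 1.0 for an isotropic unit-variance Gaussian
```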
<<</Diversity>>>
<<<Density>>>
Another interesting characteristic is the sparsity of the text embedding cluster. The density metric is proposed to estimate the number of samples that falls within a unit of volume in an embedding space.
Following the assumption mentioned above, a straight-forward definition of the volume can be written as:
up to a constant factor. However, when the dimension goes higher, this formulation easily produces exploding or vanishing density values, i.e., goes to infinity or zero.
To accommodate the impact of high-dimensionality, we impose a dimension normalization. Specifically, we introduce a notion of effective axes, which assumes most variance can be explained or captured in a sub-space of a dimension $\sqrt{H}$. We group all the axes in this sub-space together and compute the geometric mean of their radii as the effective radius. The dimension-normalized volume is then formulated as:
Given a set of embedding vectors $\lbrace e_1, ..., e_m\rbrace $, we define the density metric as:
In practice, the computed density metric values often follow a heavy-tailed distribution, so the $\log $ value is sometimes reported and denoted as density (log-scale).
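The exact volume equation is not reproduced in this excerpt, so the sketch below follows one plausible reading of the prose: the effective radius is the geometric mean of the per-axis standard deviations, the dimension-normalized volume raises it to the power sqrt(H), and density is the number of samples divided by that volume. Treat the normalization details as assumptions.

```python
import numpy as np

def density(embeddings: np.ndarray, log_scale: bool = True) -> float:
    m, H = embeddings.shape
    radii = embeddings.std(axis=0)
    r_eff = np.exp(np.mean(np.log(radii + 1e-12)))   # effective radius: geometric mean of radii
    log_volume = np.sqrt(H) * np.log(r_eff)          # assumed dimension-normalized volume
    log_density = np.log(m) - log_volume             # samples per unit of volume
    return float(log_density if log_scale else np.exp(log_density))

print(density(np.random.randn(1000, 768)))           # reported on a log scale by default
```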
<<</Density>>>
<<<Homogeneity>>>
The homogeneity metric is proposed to summarize the uniformity of a cluster distribution. That is, how uniformly the embedding vectors of the samples in a group of texts are distributed in the embedding space. We propose to quantitatively describe homogeneity by building a fully-connected, edge-weighted network, which can be modeled by a Markov chain model. A Markov chain's entropy rate is calculated and normalized to be in $[0, 1]$ range by dividing by the entropy's theoretical upper bound. This output value is defined as the homogeneity metric detailed as follows:
To construct a fully-connected network from the embedding vectors $\lbrace e_1, ..., e_m\rbrace $, we compute their pairwise distances as edge weights, an idea similar to AttriRank BIBREF22. As the Euclidean distance is not a good metric in high-dimensions, we normalize the distance by adding a power $\log (n\_dim)$. We then define a Markov chain model with the weight of $edge(i, j)$ being
and the conditional probability of transition from $i$ to $j$ can be written as
All the transition probabilities $p(i \rightarrow j)$ are from the transition matrix of a Markov chain. An entropy of this Markov chain can be calculated as
where $\nu _i$ is the stationary distribution of the Markov chain. As self-transition probability $p(i \rightarrow i)$ is always zero because of zero distance, there are $(m - 1)$ possible destinations and the entropy's theoretical upper bound becomes
Our proposed homogeneity metric is then normalized into $[0, 1]$ as a uniformity measure:
The intuition is that if some samples are close to each other but far from all the others, the calculated entropy decreases to reflect the unbalanced distribution. In contrast, if each sample can reach other samples within more-or-less the same distances, the calculated entropy as well as the homogeneity measure would be high as it implies the samples could be more uniformly distributed.
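A sketch of this construction is shown below. The row-normalized weights, zero self-transitions, stationary distribution, entropy rate, and the log(m - 1) upper bound follow the description above; the exact distance-to-weight mapping is not reproduced in this excerpt, so the inverse power of the distance used here is an assumption.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def homogeneity(embeddings: np.ndarray, n_iter: int = 200) -> float:
    m, H = embeddings.shape
    d = squareform(pdist(embeddings))            # pairwise Euclidean distances
    w = 1.0 / (d ** np.log(H) + 1e-12)           # assumed affinity: closer points get larger weights
    np.fill_diagonal(w, 0.0)                     # self-transition probability is always zero
    p = w / w.sum(axis=1, keepdims=True)         # Markov chain transition matrix
    nu = np.full(m, 1.0 / m)
    for _ in range(n_iter):                      # power iteration for the stationary distribution
        nu = nu @ p
    logp = np.where(p > 0, np.log(np.where(p > 0, p, 1.0)), 0.0)
    entropy = -np.sum(nu[:, None] * p * logp)    # entropy rate of the chain
    return float(entropy / np.log(m - 1))        # normalize by the upper bound log(m - 1)

print(homogeneity(np.random.randn(200, 50)))     # close to 1 for a uniformly distributed cluster
```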
<<</Homogeneity>>>
<<</Proposed Characteristic Metrics>>>
<<<Simulations>>>
To verify that each proposed characteristic metric holds its desirable and intuitive properties, we conduct a series of simulation experiments in 2-dimensional as well as 768-dimensional spaces. The latter has the same dimensionality as the output of our chosen embedding method, BERT, used in the following Experiments section.
<<<Simulation Setup>>>
The base simulation setup is a randomly generated isotropic Gaussian blob that contains $10,000$ data points, with the standard deviation along each axis set to $1.0$, and is centered around the origin. All Gaussian blobs are created using the make_blobs function in the scikit-learn package.
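For reference, the base cluster can be generated as follows (the random seed is arbitrary):

```python
import numpy as np
from sklearn.datasets import make_blobs

center = np.zeros((1, 768))                        # centered around the origin
X, _ = make_blobs(n_samples=10_000, centers=center, cluster_std=1.0, random_state=0)
print(X.shape)                                     # (10000, 768); use np.zeros((1, 2)) for the 2-D runs
```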
Four simulation scenarios are used to investigate the behavior of our proposed quantitative characteristic metrics:
Down-sampling: Down-sample the base cluster to be $\lbrace 90\%, 80\%, ..., 10\%\rbrace $ of its original size. That is, create Gaussian blobs with $\lbrace 9000, ..., 1000\rbrace $ data points;
Varying Spread: Generate Gaussian blobs with standard deviations of each axis to be $\lbrace 2.0, 3.0, ..., 10.0\rbrace $;
Outliers: Add $\lbrace 50, 100, ..., 500\rbrace $ outlier data points, i.e., $\lbrace 0.5\%, ..., 5\%\rbrace $ of the original cluster size, randomly on the surface with a fixed norm or radius;
Multiple Sub-clusters: Along the first axis, with $10,000$ data points in total, create $\lbrace 1, 2, ..., 10\rbrace $ clusters with equal sample sizes but at increasing distances.
For each scenario, we simulate a cluster and compute the characteristic metrics in both 2-dimensional and 768-dimensional spaces. Figure FIGREF17 visualizes each scenario by t-distributed Stochastic Neighbor Embedding (t-SNE) BIBREF23. The 768-dimensional simulations are visualized by down-projecting to 50 dimensions via Principal Component Analysis (PCA) followed by t-SNE.
<<</Simulation Setup>>>
<<<Simulation Results>>>
Figure FIGREF24 summarizes calculated diversity metrics in the first row, density metrics in the second row, and homogeneity metrics in the third row, for all simulation scenarios.
The diversity metric is robust as its values remain almost the same to the down-sampling of an input cluster. This implies the diversity metric has a desirable property that it is insensitive to the size of inputs. On the other hand, it shows a linear relationship to varying spreads. It is another intuitive property for a diversity metric that it grows linearly with increasing dispersion or variance of input data. With more outliers or more sub-clusters, the diversity metric can also reflect the increasing dispersion of cluster distributions but is less sensitive in high-dimensional spaces.
The density metric exhibits a linear relationship to the size of the inputs when down-sampling, which is desired. When increasing spreads, the trend of the density metric corresponds well with human intuition. Note that the density metric decreases at a much faster rate in the higher-dimensional space, as a log scale is used in the figure. The density metric also drops when adding outliers or having multiple distant sub-clusters. This makes sense since both scenarios should increase the dispersion of the data and thus increase our notion of volume as well. In the multiple sub-cluster scenario, the density metric becomes less sensitive in the higher-dimensional space. The reason could be that the sub-clusters are distributed only along one axis and thus have a smaller impact on volume in higher-dimensional spaces.
As random down-sampling or increasing the variance of each axis should not affect the uniformity of a cluster distribution, we expect the homogeneity metric to remain at approximately the same values, and the proposed homogeneity metric indeed demonstrates these ideal properties. Interestingly, for outliers, we first see a sharp drop of the homogeneity metric, but the values go up again slowly as more outliers are added. This corresponds well with our intuition that a small number of outliers breaks the uniformity, whereas more outliers should mean an increase in uniformity because the distribution of the added outliers themselves is highly uniform.
For multiple sub-clusters, as more sub-clusters are presented, the homogeneity should and does decrease as the data are less and less uniformly distributed in the space.
To sum up, from all simulations, our proposed diversity, density, and homogeneity metrics indeed capture the essence or intuition of dispersion, sparsity, and uniformity in a cluster distribution.
<<</Simulation Results>>>
<<</Simulations>>>
<<<Experiments>>>
The two real-world text classification tasks we used for experiments are sentiment analysis and Spoken Language Understanding (SLU).
<<<Chosen Embedding Method>>>
BERT is a self-supervised language model pretraining approach based on the Transformer BIBREF24, a multi-headed self-attention architecture that can produce different representation vectors for the same token in various sequences, i.e., contextual embeddings.
When pretraining, BERT concatenates two sequences as input, with special tokens $[CLS], [SEP], [EOS]$ denoting the start, separation, and end, respectively. BERT is then pretrained on a large unlabeled corpus with the masked language model (MLM) objective, which randomly masks out tokens that the model must predict. The other pretraining task is next sentence prediction (NSP), which predicts whether two sequences follow each other in the original text or not.
In this work, we use the pretrained $\text{BERT}_{\text{BASE}}$ which has 12 layers (L), 12 self-attention heads (A), and 768 hidden dimension (H) as the language embedding to compute the proposed data metrics. The off-the-shelf pretrained BERT is obtained from GluonNLP. For each sequence $x_i = (x_{i1}, ..., x_{il})$ with length $l$, BERT takes $[CLS], x_{i1}, ..., x_{il}, [EOS]$ as input and generates embeddings $\lbrace e_{CLS}, e_{i1}, ..., e_{il}, e_{EOS}\rbrace $ at the token level. To obtain the sequence representation, we use a mean pooling over token embeddings:
where $e_i \in \mathbb {R}^{H}$. A text collection $\lbrace x_1, ..., x_m\rbrace $, i.e., a set of token sequences, is then transformed into a group of H-dimensional vectors $\lbrace e_1, ..., e_m\rbrace $.
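A small sketch of this mean pooling is shown below. The paper uses GluonNLP's pretrained BERT; the Hugging Face transformers equivalent is used here purely for illustration, and masking out padding tokens before averaging is an implementation choice, not a detail from the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

texts = ["book a table for two", "what is the weather in seattle"]   # hypothetical collection
batch = tokenizer(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state          # (m, seq_len, 768) token-level embeddings
mask = batch["attention_mask"].unsqueeze(-1)          # ignore padding tokens when averaging
e = (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # (m, 768) sequence representations e_i
```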
We compute each metric as described previously, using three BERT layers L1, L6, and L12 as the embedding space, respectively. The calculated metric values are averaged over layers for each class and averaged over classes weighted by class size as the final value for a dataset.
<<</Chosen Embedding Method>>>
<<<Experimental Setup>>>
In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.
The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention that a text input (i.e., an utterance) conveys. For example, for the input I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. From the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains test spoken utterances (text) classified into one of 7 intents.
In both tasks, we used the open-sourced GluonNLP BERT model to perform text classification. For evaluation, sentiment analysis is measured in accuracy, whereas IC and SL are measured in accuracy and F1 score, respectively. BERT is fine-tuned on train/dev sets and evaluated on test sets.
We down-sampled the SST-2 and Snips training sets from $100\%$ to $10\%$ in intervals of $10\%$. BERT's performance is reported for each down-sampled setting in Table TABREF29 and Table TABREF30. We used the entire test sets for all model evaluations.
To compare, we compute the proposed data metrics, i.e., diversity, density, and homogeneity, on the original and the down-sampled training sets.
<<</Experimental Setup>>>
<<<Experimental Results>>>
We will discuss the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and model performance scores from down-sampling experiments on the two public benchmark datasets, in the following subsections:
<<<SST-2>>>
In Table TABREF29, the sentiment classification accuracy is $92.66\%$ without down-sampling, which is consistent with the reported GluonNLP BERT model performance on SST-2. It also indicates SST-2 training data are differentiable between label classes, i.e., from the positive class to the negative class, which satisfies our assumption for the characteristic metrics.
Decreasing the training set size does not reduce performance until it is randomly down-sampled to only $20\%$ of the original size. Meanwhile, density and homogeneity metrics also decrease significantly (highlighted in bold in Table TABREF29), implying a clear relationship between these metrics and model performance.
<<</SST-2>>>
<<<Snips>>>
In Table TABREF30, the Snips dataset seems to be distinct between IC/SL classes, since the IC accuracy and SL F1 are as high as $98.71\%$ and $96.06\%$ without down-sampling, respectively. Similar to SST-2, this implies that the Snips training data should also support the inter-class differentiability assumption of our proposed characteristic metrics.
IC accuracy on Snips remains higher than $98\%$ until we down-sample the training set to $20\%$ of the original size. In contrast, the SL F1 score is more sensitive to down-sampling of the training set, as it starts decreasing immediately. When only $10\%$ of the training set is left, the SL F1 score drops to $87.20\%$.
The diversity metric does not decrease until the training set is equal to or less than $40\%$ of the original set. This implies that random sampling does not impact the diversity if the sampling rate is greater than $40\%$; the training set is very likely to contain redundant information in terms of text diversity. This is supported by our observation that the model has consistently high IC/SL performance at down-sampling ratios between $40\%$ and $100\%$.
Moreover, the biggest drop of density and homogeneity (highlighted in bold in Table TABREF30) highly correlates with the biggest IC/SL drop, at the point the training set size is reduced from $20\%$ to $10\%$. This suggests that our proposed metrics can be used as a good indicator of model performance and for characterizing text datasets.
<<</Snips>>>
<<</Experimental Results>>>
<<</Experiments>>>
<<<Analysis>>>
We calculate and show in Table TABREF35 the Pearson correlations between the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and model performance scores from the down-sampling experiments in Table TABREF29 and Table TABREF30. Correlations higher than $0.5$ are highlighted in bold. As mentioned before, model performance is highly correlated with density and homogeneity, both of which are computed on the training set. Diversity is only moderately correlated with the Snips SL F1 score.
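The correlation computation itself is standard; for example, with SciPy (the numbers below are placeholders, not the paper's values):

```python
from scipy.stats import pearsonr

density_log = [3.1, 3.0, 2.9, 2.7, 2.4]     # hypothetical metric values per down-sampling ratio
accuracy = [92.7, 92.5, 92.1, 90.0, 85.3]   # hypothetical model scores at the same ratios
r, p_value = pearsonr(density_log, accuracy)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```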
These results are consistent with our simulations, which show that random sampling of a dataset does not necessarily affect the diversity but can reduce the density and, marginally, the homogeneity due to the decreasing number of data points in the embedding space. However, the simultaneous large drops in model performance, density, and homogeneity imply that there is only limited redundancy and that more informative data points are being thrown away when down-sampling. Moreover, the results also suggest that model performance on text classification tasks corresponds not only with data diversity but also with training data density and homogeneity.
<<</Analysis>>>
<<<Conclusions>>>
In this work, we proposed several characteristic metrics to describe the diversity, density, and homogeneity of text collections without using any labels. Pre-trained language embeddings are used to efficiently characterize text datasets. Simulation and experiments showed that our intrinsic metrics are robust and highly correlated with model performance on different text classification tasks. We would like to apply the diversity, density, and homogeneity metrics for text data augmentation and selection in a semi-supervised manner as our future work.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nProposed Characteristic Metrics\nDiversity\nDensity\nHomogeneity\nSimulations\nSimulation Setup\nSimulation Results\nExperiments\nChosen Embedding Method\nExperimental Setup\nExperimental Results\nSST-2\nSnips\nAnalysis\nConclusions"
],
"type": "outline"
}
|
2003.08553
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
QnAMaker: Data to Bot in 2 Minutes
<<<Abstract>>>
Having a bot for seamless conversations is a much-desired feature that products and services today seek for their websites and mobile apps. These bots help reduce traffic received by human support significantly by handling frequent and directly answerable known questions. Many such services have huge reference documents such as FAQ pages, which makes it hard for users to browse through this data. A conversation layer over such raw data can lower traffic to human support by a great margin. We demonstrate QnAMaker, a service that creates a conversational layer over semi-structured data such as FAQ pages, product manuals, and support documents. QnAMaker is the popular choice for Extraction and Question-Answering as a service and is used by over 15,000 bots in production. It is also used by search interfaces and not just bots.
<<</Abstract>>>
<<<Introduction>>>
QnAMaker aims to simplify the process of bot creation by extracting Question-Answer (QA) pairs from data given by users into a Knowledge Base (KB) and providing a conversational layer over it. KB here refers to one instance of an Azure Search index, where the extracted QA pairs are stored. Whenever a developer creates a KB using QnAMaker, they automatically get all the NLP capabilities required to answer users' queries. There are other systems, such as Google's Dialogflow and IBM's Watson Discovery, which try to solve this problem. QnAMaker provides unique features for ease of development, such as the ability to add a persona-based chit-chat layer on top of the bot. Additionally, bot developers get automatic feedback from the system based on end-user traffic and interaction, which helps them enrich the KB; we call this feature active learning. Our system also allows users to add a multi-turn structure to the KB using hierarchical extraction and contextual ranking. QnAMaker today supports over 35 languages and is the only system among its competitors to follow a server-client architecture; all the KB data rests only in the client's subscription, giving users total control over their data. QnAMaker is part of Microsoft Cognitive Services and currently runs on the Microsoft Azure stack.
<<</Introduction>>>
<<<System description>>>
<<<Architecture>>>
As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:
QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.
QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.
Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.
QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.
Bot: Calls the WebApp with the User's query to get results.
<<</Architecture>>>
<<<Bot Development Process>>>
Creating a bot is a 3-step process for a bot developer:
Create a QnaMaker Resource in Azure: This creates a WebApp with binaries required to run QnAMaker. It also creates an Azure Search Service for populating the index with any given knowledge base, extracted from user data
Use Management APIs to Create/Update/Delete your KB: The Create API automatically extracts the QA pairs and sends the Content to WebApp, which indexes it in Azure Search Index. Developers can also add persona-based chat content and synonyms while creating and updating their KBs.
Bot Creation: Create a bot using any framework and call the WebApp hosted in Azure to get your queries answered. There are Bot-Framework templates provided for the same.
<<</Bot Development Process>>>
<<<Extraction>>>
The Extraction component is responsible for understanding a given document and extracting potential QA pairs. These QA pairs are in turn used to create a KB to be consumed later on by the QnAMaker WebApp to answer user queries. First, basic blocks such as text and lines are extracted from the given documents. Then the layout of the document, such as columns, tables, lists, paragraphs, etc., is extracted. This is done using the Recursive X-Y cut BIBREF0. Following layout understanding, each element is tagged as header, footer, table of contents, index, watermark, table, image, table caption, image caption, heading, heading level, or answer. Agglomerative clustering BIBREF1 is used to identify headings and their hierarchy to form an intent tree. Leaf nodes of the hierarchy are considered as QA pairs. In the end, the intent tree is further augmented with entities using CRF-based sequence labeling. Intents that are repeated within and across documents are further augmented with their parent intent, adding more context to resolve potential ambiguity.
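As a purely illustrative sketch of the clustering step (the layout features used in production are not disclosed, so the font-size and indentation features below are assumptions made for the example):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# each row: [font_size, left_indent] for a detected heading block
headings = np.array([[18, 0], [14, 10], [14, 10], [11, 20], [18, 0], [11, 20]])
clusters = AgglomerativeClustering(n_clusters=None, distance_threshold=3.0).fit(headings)
print(clusters.labels_)   # headings sharing a label are treated as the same hierarchy level
```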
<<</Extraction>>>
<<<Retrieval And Ranking>>>
QnAMaker uses the Azure Search Index as its retrieval layer, followed by re-ranking on top of the retrieved results (Figure FIGREF21). Azure Search is based on inverted indexing and TF-IDF scores. Azure Search provides fuzzy matching based on edit distance, thus making retrieval robust to spelling mistakes. It also incorporates lemmatization and normalization. These indexes can scale up to millions of documents, lowering the burden on the QnAMaker WebApp, which gets fewer than 100 results to re-rank.
Different customers may use QnAMaker for different scenarios such as banking task completion, answering FAQs on company policies, or fun and engagement. The number of QAs, length of questions and answers, number of alternate questions per QA can vary significantly across different types of content. Thus, the ranker model needs to use features that are generic enough to be relevant across all use cases.
<<<Pre-Processing>>>
The pre-processing layer uses components such as Language Detection, Lemmatization, Speller, and Word Breaker to normalize user queries. It also removes junk characters and stop-words from the user's query.
<<</Pre-Processing>>>
<<<Features>>>
Going into granular features and the exact empirical formulas used is out of the scope of this paper. The broad level features used while ranking are:
WordNet: There are various features generated using WordNet BIBREF2 matching with questions and answers. This takes care of word-level semantics. For instance, if there is information about “price of furniture" in a KB and the end-user asks about “price of table", the user will likely get a relevant answer. The scores of these WordNet features are calculated as a function of:
Distance of 2 words in the WordNet graph
Distance of Lowest Common Hypernym from the root
Knowledge-Base word importance (Local IDFs)
Global word importance (Global IDFs)
This is the most important feature in our model as it has the highest relative feature gain.
CDSSM: Convolutional Deep Structured Semantic Models BIBREF3 are used for sentence-level semantic matching. This is a dual-encoder model that converts text strings (sentences, queries, predicates, entity mentions, etc.) into vector representations. These models are trained on millions of Bing query-title click-through pairs. Using the source model to vectorize the user query and the target model to vectorize the answer, we compute the cosine similarity between the two vectors, which gives the relevance of the answer to the query.
TF-IDF: Though sentence-to-vector models are trained on huge datasets, they fail to effectively disambiguate KB-specific data. This is where a standard TF-IDF BIBREF4 featurizer with local and global IDFs helps. A toy sketch of the WordNet and TF-IDF featurizers is given below.
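The sketch below uses NLTK's WordNet interface and a scikit-learn TF-IDF vectorizer fit on the KB itself (so its IDFs are local to the KB). The exact empirical combination of graph distance, hypernym depth, and local/global IDFs used in production is not public, so the weighting here is purely illustrative; the same cosine-similarity step would also apply to CDSSM sentence vectors once query and answer are encoded.

```python
import math
from nltk.corpus import wordnet as wn                       # requires nltk.download("wordnet")
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kb_questions = [
    "What is the price of furniture?",
    "How do I claim a warranty?",
    "What is the refund policy?",
]

# --- TF-IDF feature: fit on the KB itself so the IDFs are *local* to this KB. ---
vectorizer = TfidfVectorizer(lowercase=True)
kb_matrix = vectorizer.fit_transform(kb_questions)

def tfidf_feature(query: str):
    """Cosine similarity of the query against every KB question (one value per candidate)."""
    return cosine_similarity(vectorizer.transform([query]), kb_matrix)[0]

# --- WordNet feature: word-level relatedness weighted by word importance. ---
def wordnet_feature(word_q: str, word_a: str, idf: dict) -> float:
    best = 0.0
    for sq in wn.synsets(word_q):
        for sa in wn.synsets(word_a):
            path = sq.path_similarity(sa)              # shrinks with graph distance
            if path is None:
                continue
            lch = sq.lowest_common_hypernyms(sa)       # lowest common hypernym(s)
            depth = lch[0].min_depth() if lch else 0   # distance of the LCH from the root
            best = max(best, path * math.log(1 + depth))
    return best * idf.get(word_a, 1.0)                 # weight by (local) IDF

print(tfidf_feature("price of table"))
print(wordnet_feature("table", "furniture", idf={"furniture": 1.7}))
```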
<<</Features>>>
<<<Contextual Features>>>
We extend the features for contextual ranking by modifying the candidate QAs and user query in these ways:
$Query_{modified}$ = Query + Previous Answer; for instance, if the user query is “yes” and the previous answer is “do you want to know about XYZ”, the current query becomes “do you want to know about XYZ yes”.
Candidate QnA pairs are appended with their parent questions and answers; no contextual information is used from the user's query. For instance, if a candidate QnA has the question “benefits” and its parent question was “know about XYZ”, the candidate QA's question is changed to “know about XYZ benefits”.
The features mentioned in Section SECREF20 are also calculated for these modified combinations; they carry the contextual information.
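Both modifications amount to simple string operations applied before feature extraction, as sketched below; the field names are illustrative.

```python
def contextual_query(query: str, previous_answer: str) -> str:
    """Combine the current user query with the previous system answer (as in the example above)."""
    return f"{previous_answer} {query}".strip()

def contextual_candidate(candidate: dict) -> dict:
    """Prefix a candidate QnA with its parent question/answer, if any."""
    parent = candidate.get("parent")
    if not parent:
        return candidate
    return {
        "question": f"{parent['question']} {candidate['question']}".strip(),
        "answer": f"{parent['answer']} {candidate['answer']}".strip(),
    }

query = contextual_query("yes", "do you want to know about XYZ")
candidate = contextual_candidate({
    "question": "benefits",
    "answer": "XYZ offers health and travel benefits.",
    "parent": {"question": "know about XYZ", "answer": "XYZ is our premium plan."},
})
print(query)                  # "do you want to know about XYZ yes"
print(candidate["question"])  # "know about XYZ benefits"
```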
<<</Contextual Features>>>
<<<Modeling and Training>>>
We use gradient-boosted decision trees as our ranking model to combine all the features. Early stopping BIBREF5 based on the Generality-to-Progress ratio is used to decide the number of trees, and Tolerant Pruning BIBREF6 helps prevent overfitting. We follow incremental training when there are small changes in features or training data, so that the score distribution does not change drastically.
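A minimal sketch of such a ranker using scikit-learn's gradient boosting follows. The Generality-to-Progress stopping criterion and Tolerant Pruning are not available off the shelf, so standard validation-based early stopping stands in for them, and the features and labels below are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy data: one row per (query, candidate QnA) pair; columns stand in for ranking
# features such as WordNet score, CDSSM cosine, TF-IDF cosine; label 1 = relevant.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + 0.1 * rng.standard_normal(500) > 0.55).astype(int)

ranker = GradientBoostingClassifier(
    n_estimators=500,
    learning_rate=0.05,
    validation_fraction=0.2,   # held-out split used for early stopping
    n_iter_no_change=10,       # stop when validation loss stops improving
    random_state=0,
)
ranker.fit(X, y)
print("trees actually used:", ranker.n_estimators_)

# At serving time, candidates are sorted by predicted relevance probability.
scores = ranker.predict_proba(X[:5])[:, 1]
print(np.argsort(-scores))
```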
<<</Modeling and Training>>>
<<</Retrieval And Ranking>>>
<<<Persona Based Chit-Chat>>>
We add support for bot developers to directly enable handling of chit-chat queries like “hi”, “thank you”, and “what's up” in their QnAMaker bots. In addition to chit-chat, we also give bot developers the flexibility to ground responses for such queries in a specific personality: professional, witty, friendly, caring, or enthusiastic. For example, the “Humorous” personality can be used for a casual bot, whereas a “Professional” personality is better suited to banking FAQs or task-completion bots. There is a list of 100+ predefined intents BIBREF7, with a curated list of queries for each intent and a separate query-understanding layer for ranking these intents. The arbitration between chit-chat answers and answers from the user's knowledge base is handled by a chat-domain classifier BIBREF8.
<<</Persona Based Chit-Chat>>>
<<<Active Learning>>>
The majority of KBs are created from existing FAQ pages or manuals, but improving their quality requires effort from the developers. Active learning generates suggestions based on end-user feedback as well as the ranker's implicit signals. For instance, if, for a query, the CDSSM feature is confident that one QnA pair should be ranked higher whereas the WordNet feature favors another, the active learning system tries to disambiguate this by showing the query as a suggestion to the bot developer. To avoid showing similar suggestions to developers, DBSCAN clustering is applied to optimize the number of suggestions shown.
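The deduplication step can be sketched with scikit-learn's DBSCAN, assuming each pending suggestion has already been embedded as a vector (for example with the same CDSSM encoder); the embeddings and the eps value below are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Rows are vector representations of candidate suggestions (e.g., sentence embeddings).
suggestion_vectors = np.array([
    [0.10, 0.90], [0.12, 0.88], [0.11, 0.91],   # near-duplicates of one suggestion
    [0.80, 0.15], [0.82, 0.14],                 # near-duplicates of another
    [0.50, 0.50],                               # an isolated suggestion
])

labels = DBSCAN(eps=0.05, min_samples=1).fit_predict(suggestion_vectors)

# Show at most one suggestion per cluster to the bot developer.
shown = {}
for idx, label in enumerate(labels):
    shown.setdefault(label, idx)
print(sorted(shown.values()))   # indices of the representative suggestions
```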
<<</Active Learning>>>
<<</System description>>>
<<<Evaluation and Insights>>>
QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance on datasets across various domains. The evaluations are done by managed judges who understand the knowledge base and then judge the relevance of user queries to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which the judges do not agree on the label. Chit-chat can itself be considered a domain. Thus, we evaluate performance on a given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on chit-chat data alone (2nd row in Table TABREF19). The hybrid of deep learning (CDSSM) and machine learning features gives our ranking model low computation cost, high explainability, and strong F1/AUC scores. Based on QnAMaker usage, we observed the following trends:
Around 27% of the knowledge bases created use pre-built persona-based chit-chat, out of which $\sim $4% are created for chit-chat alone. The most used personality is Professional, which is used in 9% of knowledge bases.
Around 25% of developers have enabled active learning suggestions. The acceptance-to-rejection ratio for active learning suggestions is 0.31.
25.5% of the knowledge bases use a single URL as the source during creation, $\sim $41% use different sources such as multiple URLs, and 15.19% use both URLs and editorial content as sources. The rest use editorial content only.
<<</Evaluation and Insights>>>
<<<Demonstration>>>
We demonstrate QnAMaker: a service to add a conversational layer over semi-structured user data. In addition to query answering, we support novel features like personality-grounded chit-chat, active learning based on user-interaction feedback (Figure FIGREF40), and hierarchical extraction for multi-turn conversations (Figure FIGREF41). The goal of the demonstration is to show how easy it is to create an intelligent bot using QnAMaker. All demonstrations will be done on the production website. A demo video can be seen here.
<<</Demonstration>>>
<<<Future Work>>>
The system currently does not highlight the answer span and does not generate answers grounded in the KB. We will soon be supporting Answer Span BIBREF9 and KB-grounded response generation BIBREF10 in QnAMaker. We are also working on user-defined personas for chit-chat (automatically learned from user documents). We aim to enhance our extraction to work for any unstructured document as well as images. We are also experimenting with improving our ranking system by using semantic vector-based search for retrieval and transformer-based models for re-ranking.
<<</Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nSystem description\nArchitecture\nBot Development Process\nExtraction\nRetrieval And Ranking\nPre-Processing\nFeatures\nContextual Features\nModeling and Training\nPersona Based Chit-Chat\nActive Learning\nEvaluation and Insights\nDemonstration\nFuture Work"
],
"type": "outline"
}
|
1909.12140
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
DisSim: A Discourse-Aware Syntactic Text Simplification Framework for English and German
<<<Abstract>>>
We introduce DisSim, a discourse-aware sentence splitting framework for English and German whose goal is to transform syntactically complex sentences into an intermediate representation that presents a simple and more regular structure which is easier to process for downstream semantic applications. For this purpose, we turn input sentences into a two-layered semantic hierarchy in the form of core facts and accompanying contexts, while identifying the rhetorical relations that hold between them. In that way, we preserve the coherence structure of the input and, hence, its interpretability for downstream tasks.
<<</Abstract>>>
<<<Introduction>>>
We developed a syntactic text simplification (TS) approach that can be used as a preprocessing step to facilitate and improve the performance of a wide range of artificial intelligence (AI) tasks, such as Machine Translation, Information Extraction (IE) or Text Summarization. Since shorter sentences are generally better processed by natural language processing (NLP) systems BIBREF0, the goal of our approach is to break down a complex source sentence into a set of minimal propositions, i.e. a sequence of sound, self-contained utterances, with each of them presenting a minimal semantic unit that cannot be further decomposed into meaningful propositions BIBREF1.
However, any sound and coherent text is not simply a loose arrangement of self-contained units, but rather a logical structure of utterances that are semantically connected BIBREF2. Consequently, when carrying out syntactic simplification operations without considering discourse implications, the rewriting may easily result in a disconnected sequence of simplified sentences that lack important contextual information, making the text harder to interpret. Thus, in order to preserve the coherence structure and, hence, the interpretability of the input, we developed a discourse-aware TS approach based on Rhetorical Structure Theory (RST) BIBREF3. It establishes a contextual hierarchy between the split components, and identifies and classifies the semantic relationship that holds between them. In that way, a complex source sentence is turned into a so-called discourse tree, consisting of a set of hierarchically ordered and semantically interconnected sentences that present a simplified syntax which is easier to process for downstream semantic applications and may support a faster generalization in machine learning tasks.
<<</Introduction>>>
<<<System Description>>>
We present DisSim, a discourse-aware sentence splitting approach for English and German that creates a semantic hierarchy of simplified sentences. It takes a sentence as input and performs a recursive transformation process that is based upon a small set of 35 hand-crafted grammar rules for the English version and 29 rules for the German approach. These patterns were heuristically determined in a comprehensive linguistic analysis and encode syntactic and lexical features that can be derived from a sentence's parse tree. Each rule specifies (1) how to split up and rephrase the input into structurally simplified sentences and (2) how to set up a semantic hierarchy between them. They are recursively applied on a given source sentence in a top-down fashion. When no more rule matches, the algorithm stops and returns the generated discourse tree.
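The recursive, top-down control loop can be sketched in a few lines of Python. The real rules are hand-crafted patterns over parse trees, so the single string-based rule below is a deliberately crude placeholder that only illustrates how splitting, recursion, and the core/context labelling interact.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class DiscourseNode:
    sentence: str
    constituency: str = "core"          # "core" (nucleus) or "context" (satellite)
    relation: Optional[str] = None      # rhetorical relation to the parent
    children: List["DiscourseNode"] = field(default_factory=list)

# A rule maps one sentence to (core, context, relation) or None if it does not apply.
Rule = Callable[[str], Optional[Tuple[str, str, str]]]

def because_rule(sentence: str):
    """Toy stand-in for one of the hand-crafted splitting rules."""
    if " because " in sentence:
        core, context = sentence.split(" because ", 1)
        return core.strip() + ".", context.strip().capitalize(), "Cause"
    return None

def simplify(sentence: str, rules: List[Rule]) -> DiscourseNode:
    """Apply the first matching rule, then recurse; stop when no rule matches."""
    for rule in rules:
        result = rule(sentence)
        if result is not None:
            core, context, relation = result
            node = simplify(core, rules)
            satellite = simplify(context, rules)
            satellite.constituency, satellite.relation = "context", relation
            node.children.append(satellite)
            return node
    return DiscourseNode(sentence)

tree = simplify("The match was cancelled because it rained heavily.", [because_rule])
print(tree.sentence, "->", [(c.relation, c.sentence) for c in tree.children])
```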
<<<Split into Minimal Propositions>>>
In a first step, source sentences that present a complex linguistic form are turned into clean, compact structures by decomposing clausal and phrasal components. For this purpose, the transformation rules encode both the splitting points and rephrasing procedure for reconstructing proper sentences.
<<</Split into Minimal Propositions>>>
<<<Establish a Semantic Hierarchy>>>
Each split will create two or more sentences with a simplified syntax. To establish a semantic hierarchy between them, two subtasks are carried out:
<<<Constituency Type Classification.>>>
First, we set up a contextual hierarchy between the split sentences by connecting them with information about their hierarchical level, similar to the concept of nuclearity in RST. For this purpose, we distinguish core sentences (nuclei), which carry the key information of the input, from accompanying contextual sentences (satellites) that disclose additional information about it. To differentiate between those two types of constituents, the transformation patterns encode a simple syntax-based approach where subordinate clauses/phrases are classified as context sentences, while superordinate as well as coordinate clauses/phrases are labelled as core.
<<</Constituency Type Classification.>>>
<<<Rhetorical Relation Identification.>>>
Second, we aim to restore the semantic relationship between the disembedded components. For this purpose, we identify and classify the rhetorical relations that hold between the simplified sentences, making use of both syntactic features, which are derived from the input's parse tree structure, and lexical features in the form of cue phrases. Following the work of Taboada13, they are mapped to a predefined list of rhetorical cue words to infer the type of rhetorical relation.
<<</Rhetorical Relation Identification.>>>
<<</Establish a Semantic Hierarchy>>>
<<</System Description>>>
<<<Usage>>>
DisSim can be either used as a Java API, imported as a Maven dependency, or as a service which we provide through a command line interface or a REST-like web service that can be deployed via docker. It takes as input NL text in the form of a single sentence. Alternatively, a file containing a sequence of sentences can be loaded. The result of the transformation process is either written to the console or stored in a specified output file in JSON format. We also provide a browser-based user interface, where the user can directly type in sentences to be processed (see Figure FIGREF1).
<<</Usage>>>
<<<Experiments>>>
For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress.
<<</Experiments>>>
<<<Application in Downstream Tasks>>>
An extrinsic evaluation was carried out on the task of Open IE BIBREF7. It revealed that when applying DisSim as a preprocessing step, the performance of state-of-the-art Open IE systems can be improved by up to 346% in precision and 52% in recall, i.e. leading to a lower information loss and a higher accuracy of the extracted relations. For details, the interested reader may refer to niklaus-etal-2019-transforming.
Moreover, most current Open IE approaches output only a loose arrangement of extracted tuples that are hard to interpret as they ignore the context under which a proposition is complete and correct and thus lack the expressiveness needed for a proper interpretation of complex assertions BIBREF8. As illustrated in Figure FIGREF9, with the help of the semantic hierarchy generated by our discourse-aware sentence splitting approach the output of Open IE systems can be easily enriched with contextual information that allows to restore the semantic relationship between a set of propositions and, hence, preserve their interpretability in downstream tasks.
<<</Application in Downstream Tasks>>>
<<<Conclusion>>>
We developed and implemented a discourse-aware syntactic TS approach that recursively splits and rephrases complex English or German sentences into a semantic hierarchy of simplified sentences. The resulting lightweight semantic representation can be used to facilitate and improve a variety of AI tasks.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nSystem Description\nSplit into Minimal Propositions\nEstablish a Semantic Hierarchy\nConstituency Type Classification.\nRhetorical Relation Identification.\nUsage\nExperiments\nApplication in Downstream Tasks\nConclusion"
],
"type": "outline"
}
|
2002.11893
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset
<<<Abstract>>>
To advance multi-domain (cross-domain) dialogue modeling as well as alleviate the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals that favor inter-domain dependency and encourage natural transition across domains in conversation. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will facilitate researchers to compare and evaluate their models on this corpus. The large size and rich annotation of CrossWOZ make it suitable to investigate a variety of tasks in cross-domain dialogue modeling, such as dialogue state tracking, policy learning, user simulation, etc.
<<</Abstract>>>
<<<Introduction>>>
Recently, there have been a variety of task-oriented dialogue models thanks to the prosperity of neural architectures BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the research is still largely limited by the availability of large-scale high-quality dialogue data. Many corpora have advanced the research of task-oriented dialogue systems, most of which are single domain conversations, including ATIS BIBREF6, DSTC 2 BIBREF7, Frames BIBREF8, KVRET BIBREF9, WOZ 2.0 BIBREF10 and M2M BIBREF11.
Despite the significant contributions to the community, these datasets are still limited in size, language variation, or task complexity. Furthermore, there is a gap between existing dialogue corpora and real-life human dialogue data. In real-life conversations, it is natural for humans to transition between different domains or scenarios while still maintaining coherent contexts. Thus, real-life dialogues are much more complicated than those dialogues that are only simulated within a single domain. To address this issue, some multi-domain corpora have been proposed BIBREF12, BIBREF13. The most notable corpus is MultiWOZ BIBREF12, a large-scale multi-domain dataset which consists of crowdsourced human-to-human dialogues. It contains 10K dialogue sessions and 143K utterances for 7 domains, with annotation of system-side dialogue states and dialogue acts. However, the state annotations are noisy BIBREF14, and user-side dialogue acts are missing. The dependency across domains is simply embodied in imposing the same pre-specified constraints on different domains, such as requiring both a hotel and an attraction to locate in the center of the town.
In comparison to the abundance of English dialogue data, surprisingly, there is still no widely recognized Chinese task-oriented dialogue corpus. In this paper, we propose CrossWOZ, a large-scale Chinese multi-domain (cross-domain) task-oriented dialogue dataset. A dialogue example is shown in Figure FIGREF1. We compare CrossWOZ to other corpora in Table TABREF5 and TABREF6. Our dataset has the following features compared to other corpora (particularly MultiWOZ BIBREF12):
The dependency between domains is more challenging because the choice in one domain will affect the choices in related domains in CrossWOZ. As shown in Figure FIGREF1 and Table TABREF6, the hotel must be near the attraction chosen by the user in previous turns, which requires more accurate context understanding.
It is the first Chinese corpus that contains large-scale multi-domain task-oriented dialogues, consisting of 6K sessions and 102K utterances for 5 domains (attraction, restaurant, hotel, metro, and taxi).
Annotation of dialogue states and dialogue acts is provided for both the system side and user side. The annotation of user states enables us to track the conversation from the user's perspective and can empower the development of more elaborate user simulators.
In this paper, we present the process of dialogue collection and provide detailed data analysis of the corpus. Statistics show that our cross-domain dialogues are complicated. To facilitate model comparison, benchmark models are provided for different modules in pipelined task-oriented dialogue systems, including natural language understanding, dialogue state tracking, dialogue policy learning, and natural language generation. We also provide a user simulator, which will facilitate the development and evaluation of dialogue models on this corpus. The corpus and the benchmark models are publicly available at https://github.com/thu-coai/CrossWOZ.
<<</Introduction>>>
<<<Related Work>>>
According to whether the dialogue agent is human or machine, we can group the collection methods of existing task-oriented dialogue datasets into three categories. The first one is human-to-human dialogues. One of the earliest and well-known ATIS dataset BIBREF6 used this setting, followed by BIBREF8, BIBREF9, BIBREF10, BIBREF15, BIBREF16 and BIBREF12. Though this setting requires many human efforts, it can collect natural and diverse dialogues. The second one is human-to-machine dialogues, which need a ready dialogue system to converse with humans. The famous Dialogue State Tracking Challenges provided a set of human-to-machine dialogue data BIBREF17, BIBREF7. The performance of the dialogue system will largely influence the quality of dialogue data. The third one is machine-to-machine dialogues. It needs to build both user and system simulators to generate dialogue outlines, then use templates BIBREF3 to generate dialogues or further employ people to paraphrase the dialogues to make them more natural BIBREF11, BIBREF13. It needs much less human effort. However, the complexity and diversity of dialogue policy are limited by the simulators. To explore dialogue policy in multi-domain scenarios, and to collect natural and diverse dialogues, we resort to the human-to-human setting.
Most of the existing datasets only involve single domain in one dialogue, except MultiWOZ BIBREF12 and Schema BIBREF13. MultiWOZ dataset has attracted much attention recently, due to its large size and multi-domain characteristics. It is at least one order of magnitude larger than previous datasets, amounting to 8,438 dialogues and 115K turns in the training set. It greatly promotes the research on multi-domain dialogue modeling, such as policy learning BIBREF18, state tracking BIBREF19, and context-to-text generation BIBREF20. Recently the Schema dataset is collected in a machine-to-machine fashion, resulting in 16,142 dialogues and 330K turns for 16 domains in the training set. However, the multi-domain dependency in these two datasets is only embodied in imposing the same pre-specified constraints on different domains, such as requiring a restaurant and an attraction to locate in the same area, or the city of a hotel and the destination of a flight to be the same (Table TABREF6).
Table TABREF5 presents a comparison between our dataset with other task-oriented datasets. In comparison to MultiWOZ, our dataset has a comparable scale: 5,012 dialogues and 84K turns in the training set. The average number of domains and turns per dialogue are larger than those of MultiWOZ, which indicates that our task is more complex. The cross-domain dependency in our dataset is natural and challenging. For example, as shown in Table TABREF6, the system needs to recommend a hotel near the attraction chosen by the user in previous turns. Thus, both system recommendation and user selection will dynamically impact the dialogue. We also allow the same domain to appear multiple times in a user goal since a tourist may want to go to more than one attraction.
To better track the conversation flow and model user dialogue policy, we provide annotation of user states in addition to system states and dialogue acts. While the system state tracks the dialogue history, the user state is maintained by the user and indicates whether the sub-goals have been completed, which can be used to predict user actions. This information will facilitate the construction of the user simulator.
To the best of our knowledge, CrossWOZ is the first large-scale Chinese dataset for task-oriented dialogue systems, which will largely alleviate the shortage of Chinese task-oriented dialogue corpora that are publicly available.
<<</Related Work>>>
<<<Data Collection>>>
Our corpus simulates scenarios in which a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized below:
Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.
Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.
Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.
Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances.
<<<Database Construction>>>
We collected 465 attractions, 951 restaurants, and 1,133 hotels in Beijing from the Web. Some statistics are shown in Table TABREF11. There are three types of slots for each entity: common slots such as name and address; binary slots for hotel services such as wake-up call; nearby attractions/restaurants/hotels slots that contain nearby entities in the attraction, restaurant, and hotel domains. Since it is not usual to find another nearby hotel in the hotel domain, we did not collect such information. This nearby relation allows us to generate natural cross-domain goals, such as "find another attraction near the first one" and "find a restaurant near the attraction". Nearest metro stations of HAR entities form the metro database. In contrast, we provided the pseudo car type and plate number for the taxi domain.
<<</Database Construction>>>
<<<Goal Generation>>>
To avoid generating overly complex goals, each goal has at most five sub-goals. To generate more natural goals, the sub-goals can be of the same domain, such as two attractions near each other. The goal is represented as a list of (sub-goal id, domain, slot, value) tuples, named as semantic tuples. The sub-goal id is used to distinguish sub-goals which may be in the same domain. There are two types of slots: informable slots which are the constraints that the user needs to inform the system, and requestable slots which are the information that the user needs to inquire from the system. As shown in Table TABREF13, besides common informable slots (italic values) whose values are determined before the conversation, we specially design cross-domain informable slots (bold values) whose values refer to other sub-goals. Cross-domain informable slots utilize sub-goal id to connect different sub-goals. Thus the actual constraints vary according to the different contexts instead of being pre-specified. The values of common informable slots are sampled randomly from the database. Based on the informable slots, users are required to gather the values of requestable slots (blank values in Table TABREF13) through conversation.
There are four steps in goal generation. First, we generate independent sub-goals in HAR domains. For each domain in HAR domains, with the same probability $\mathcal {P}$ we generate a sub-goal, while with the probability of $1-\mathcal {P}$ we do not generate any sub-goal for this domain. Each sub-goal has common informable slots and requestable slots. As shown in Table TABREF15, all slots of HAR domains can be requestable slots, while the slots with an asterisk can be common informable slots.
Second, we generate cross-domain sub-goals in HAR domains. For each generated sub-goal (e.g., the attraction sub-goal in Table TABREF13), if its requestable slots contain "nearby hotels", we generate an additional sub-goal in the hotel domain (e.g., the hotel sub-goal in Table TABREF13) with the probability of $\mathcal {P}_{attraction\rightarrow hotel}$. Of course, the selected hotel must satisfy the nearby relation to the attraction entity. Similarly, we do not generate any additional sub-goal in the hotel domain with the probability of $1-\mathcal {P}_{attraction\rightarrow hotel}$. This also works for the attraction and restaurant domains. $\mathcal {P}_{hotel\rightarrow hotel}=0$ since we do not allow the user to find the nearby hotels of one hotel.
Third, we generate sub-goals in the metro and taxi domains. With the probability of $\mathcal {P}_{taxi}$, we generate a sub-goal in the taxi domain (e.g., the taxi sub-goal in Table TABREF13) to commute between two entities of HAR domains that are already generated. It is similar for the metro domain and we set $\mathcal {P}_{metro}=\mathcal {P}_{taxi}$. All slots in the metro or taxi domain appear in the sub-goals and must be filled. As shown in Table TABREF15, from and to slots are always cross-domain informable slots, while others are always requestable slots.
Last, we rearrange the order of the sub-goals to generate more natural and logical user goals. We require that a sub-goal should be followed by its referred sub-goal as immediately as possible.
To make the workers aware of this cross-domain feature, we additionally provide a task description for each user goal in natural language, which is generated from the structured goal by hand-crafted templates.
Compared with the goals whose constraints are all pre-specified, our goals impose much more dependency between different domains, which will significantly influence the conversation. The exact values of cross-domain informable slots are finally determined according to the dialogue context.
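A simplified sketch of this sampling procedure is given below, with semantic tuples represented as (sub_goal_id, domain, slot, value) and only the attraction-to-hotel cross-domain step shown; the probabilities and slot inventories are placeholders rather than the values used to build CrossWOZ.

```python
import random

P_DOMAIN = 0.5                # probability of an independent sub-goal per HAR domain
P_ATTRACTION_TO_HOTEL = 0.4   # probability of a cross-domain "nearby hotel" sub-goal

def generate_goal(seed=None):
    """Return a user goal as a list of semantic tuples (sub_goal_id, domain, slot, value)."""
    rng = random.Random(seed)
    goal, next_id = [], 1

    # Step 1: independent sub-goals in the Hotel/Attraction/Restaurant (HAR) domains.
    for domain in ("attraction", "restaurant", "hotel"):
        if rng.random() < P_DOMAIN:
            goal.append((next_id, domain, "name", ""))                 # requestable slot
            informable = ("fee", "free") if domain == "attraction" else ("rating", "4+")
            goal.append((next_id, domain) + informable)                # common informable slot
            next_id += 1

    # Step 2: cross-domain sub-goals whose values refer to earlier sub-goals.
    for sub_id, domain, slot, _ in list(goal):
        if domain == "attraction" and slot == "name" and rng.random() < P_ATTRACTION_TO_HOTEL:
            goal.append((next_id, "hotel", "name", f"near (id={sub_id})"))
            next_id += 1

    return goal

for semantic_tuple in generate_goal(seed=3):
    print(semantic_tuple)
```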
<<</Goal Generation>>>
<<<Dialogue Collection>>>
We developed a specialized website that allows two workers to converse synchronously and make annotations online. On the website, workers are free to choose one of the two roles: tourist (user) or system (wizard). Then, two paired workers are sent to a chatroom. The user needs to accomplish the allocated goal through conversation while the wizard searches the database to provide the necessary information and gives responses. Before the formal data collection, we trained the workers to complete a small number of dialogues by giving them feedback. Finally, 90 well-trained workers participated in the data collection.
In contrast, MultiWOZ BIBREF12 hired more than a thousand workers to converse asynchronously. Each worker received a dialogue context to review and needed to respond for only one turn at a time. The collected dialogues may be incoherent because workers may not understand the context correctly and multiple workers contributed to the same dialogue session, possibly leading to more variance in the data quality. For example, some workers expressed two mutually exclusive constraints in two consecutive user turns and failed to eliminate the system's confusion in the next several turns. Compared with MultiWOZ, our synchronous conversation setting may produce more coherent dialogues.
<<<User Side>>>
The user state is the same as the user goal before a conversation starts. At each turn, the user needs to 1) modify the user state according to the system response at the preceding turn, 2) select some semantic tuples in the user state, which indicates the dialogue acts, and 3) compose the utterance according to the selected semantic tuples. In addition to filling the required values and updating cross-domain informable slots with real values in the user state, the user is encouraged to modify the constraints when there is no result under such constraints. The change will also be recorded in the user state. Once the goal is completed (all the values in the user state are filled), the user can terminate the dialogue.
<<</User Side>>>
<<<Wizard Side>>>
We regard the database query as the system state, which records the constraints of each domain till the current turn. At each turn, the wizard needs to 1) fill the query according to the previous user response and search the database if necessary, 2) select the retrieved entities, and 3) respond in natural language based on the information of the selected entities. If none of the entities satisfy all the constraints, the wizard will try to relax some of them for a recommendation, resulting in multiple queries. The first query records original user constraints while the last one records the constraints relaxed by the system.
<<</Wizard Side>>>
<<</Dialogue Collection>>>
<<<Dialogue Annotation>>>
After collecting the conversation data, we used some rules to annotate dialogue acts automatically. Each utterance can have several dialogue acts. Each dialogue act is a tuple that consists of intent, domain, slot, and value. We pre-define 6 types of intents and use the update of the user state and system state as well as keyword matching to obtain dialogue acts. For the user side, dialogue acts are mainly derived from the selection of semantic tuples that contain the information of domain, slot, and value. For example, if (1, Attraction, fee, free) in Table TABREF13 is selected by the user, then (Inform, Attraction, fee, free) is labelled. If (1, Attraction, name, ) is selected, then (Request, Attraction, name, none) is labelled. If (2, Hotel, name, near (id=1)) is selected, then (Select, Hotel, src_domain, Attraction) is labelled. This intent is specially designed for the "nearby" constraint. For the system side, we mainly applied keyword matching to label dialogue acts. Inform intent is derived by matching the system utterance with the information of selected entities. When the wizard selects multiple retrieved entities and recommend them, Recommend intent is labeled. When the wizard expresses that no result satisfies user constraints, NoOffer is labeled. For General intents such as "goodbye", "thanks" at both user and system sides, keyword matching is applied.
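The mapping from selected semantic tuples to user dialogue acts can be sketched as below. Only the three cases mentioned above are covered, and the source domain of the Select act is hard-coded to Attraction as in the example; the actual rules presumably resolve it from the referred sub-goal.

```python
def user_dialogue_acts(selected_tuples):
    """Map selected semantic tuples (sub_goal_id, domain, slot, value) to dialogue acts."""
    acts = []
    for _, domain, slot, value in selected_tuples:
        if isinstance(value, str) and value.startswith("near (id="):
            # Cross-domain "nearby" constraint -> Select intent (source domain simplified).
            acts.append(("Select", domain, "src_domain", "Attraction"))
        elif value:
            acts.append(("Inform", domain, slot, value))
        else:
            acts.append(("Request", domain, slot, "none"))
    return acts

selected = [
    (1, "Attraction", "fee", "free"),
    (1, "Attraction", "name", ""),
    (2, "Hotel", "name", "near (id=1)"),
]
print(user_dialogue_acts(selected))
# [('Inform', 'Attraction', 'fee', 'free'),
#  ('Request', 'Attraction', 'name', 'none'),
#  ('Select', 'Hotel', 'src_domain', 'Attraction')]
```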
We also obtained a binary label for each semantic tuple in the user state, which indicates whether this semantic tuple has been selected to be expressed by the user. This annotation directly illustrates the progress of the conversation.
To evaluate the quality of the annotation of dialogue acts and states (both user and system states), three experts were employed to manually annotate dialogue acts and states for the same 50 dialogues (806 utterances), 10 for each goal type (see Section SECREF4). Since dialogue act annotation is not a classification problem, we didn't use Fleiss' kappa to measure the agreement among experts. We used dialogue act F1 and state accuracy to measure the agreement between each two experts' annotations. The average dialogue act F1 is 94.59% and the average state accuracy is 93.55%. We then compared our annotations with each expert's annotations which are regarded as gold standard. The average dialogue act F1 is 95.36% and the average state accuracy is 94.95%, which indicates the high quality of our annotations.
<<</Dialogue Annotation>>>
<<</Data Collection>>>
<<<Statistics>>>
After removing uncompleted dialogues, we collected 6,012 dialogues in total. The dataset is split randomly for training/validation/test, where the statistics are shown in Table TABREF25. The average number of sub-goals in our dataset is 3.24, which is much larger than that in MultiWOZ (1.80) BIBREF12 and Schema (1.84) BIBREF13. The average number of turns (16.9) is also larger than that in MultiWOZ (13.7). These statistics indicate that our dialogue data are more complex.
According to the type of user goal, we group the dialogues in the training set into five categories:
417 dialogues have only one sub-goal in HAR domains.
1573 dialogues have multiple sub-goals (2$\sim $3) in HAR domains. However, these sub-goals do not have cross-domain informable slots.
691 dialogues have multiple sub-goals in HAR domains and at least one sub-goal in the metro or taxi domain (3$\sim $5 sub-goals). The sub-goals in HAR domains do not have cross-domain informable slots.
1,759 dialogues have multiple sub-goals (2$\sim $5) in HAR domains with cross-domain informable slots.
572 dialogues have multiple sub-goals in HAR domains with cross-domain informable slots and at least one sub-goal in the metro or taxi domain (3$\sim $5 sub-goals).
The data statistics are shown in Table TABREF26. As mentioned in Section SECREF14, we generate independent multi-domain, cross multi-domain, and traffic domain sub-goals one by one. Thus in terms of the task complexity, we have S<M<CM and M<M+T<CM+T, which is supported by the average number of sub-goals, semantic tuples, and turns per dialogue in Table TABREF26. The average number of tokens also becomes larger when the goal becomes more complex. About 60% of dialogues (M+T, CM, and CM+T) have cross-domain informable slots. Because of the limit of maximal sub-goals number, the ratio of dialogue number of CM+T to CM is smaller than that of M+T to M.
CM and CM+T are much more challenging than other tasks because additional cross-domain constraints in HAR domains are strict and will result in more "NoOffer" situations (i.e., the wizard finds no result that satisfies the current constraints). In this situation, the wizard will try to relax some constraints and issue multiple queries to find some results for a recommendation while the user will compromise and change the original goal. The negotiation process is captured by "NoOffer rate", "Multi-query rate", and "Goal change rate" in Table TABREF26. In addition, "Multi-query rate" suggests that each sub-goal in M and M+T is as easy to finish as the goal in S.
The distribution of dialogue length is shown in Figure FIGREF27, which is an indicator of the task complexity. Most single-domain dialogues terminate within 10 turns. The curves of M and M+T are almost of the same shape, which implies that the traffic task requires two additional turns on average to complete the task. The curves of CM and CM+T are less similar. This is probably because CM goals that have 5 sub-goals (about 22%) can not further generate a sub-goal in traffic domains and become CM+T goals.
<<</Statistics>>>
<<<Corpus Features>>>
Our corpus is unique in the following aspects:
Complex user goals are designed to favor inter-domain dependency and natural transition between multiple domains. In return, the collected dialogues are more complex and natural for cross-domain dialogue tasks.
A well-controlled, synchronous setting is applied to collect human-to-human dialogues. This ensures the high quality of the collected dialogues.
Explicit annotations are provided at not only the system side but also the user side. This feature allows us to model user behaviors or develop user simulators more easily.
<<</Corpus Features>>>
<<<Benchmark and Analysis>>>
CrossWOZ can be used in different tasks or settings of a task-oriented dialogue system. To facilitate further research, we provided benchmark models for different components of a pipelined task-oriented dialogue system (Figure FIGREF32), including natural language understanding (NLU), dialogue state tracking (DST), dialogue policy learning, and natural language generation (NLG). These models are implemented using ConvLab-2 BIBREF21, an open-source task-oriented dialog system toolkit. We also provided a rule-based user simulator, which can be used to train dialogue policy and generate simulated dialogue data. The benchmark models and simulator will greatly facilitate researchers to compare and evaluate their models on our corpus.
<<<Natural Language Understanding>>>
Task: The natural language understanding component in a task-oriented dialogue system takes an utterance as input and outputs the corresponding semantic representation, namely, a dialogue act. The task can be divided into two sub-tasks: intent classification that decides the intent type of an utterance, and slot tagging which identifies the value of a slot.
Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings ("free") as input and outputs tags in BIO schema ("B-Inform-Attraction-fee"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLP is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.
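A condensed sketch of this two-headed architecture, using HuggingFace Transformers and PyTorch, is given below. Tokenization, BIO label alignment, the construction of the context embedding from the last three utterances, and the training loop are omitted, and single linear layers stand in for the MLPs; it is an illustration of the idea, not the ConvLab-2 implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class TwoHeadNLU(nn.Module):
    """BERT encoder with a token-level BIO tagger and a sentence-level multi-label head."""

    def __init__(self, num_tags, num_acts, model_name="bert-base-chinese", context_dim=768):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Slot tagger: word embedding (+ context embedding) -> BIO tag per token.
        self.tagger = nn.Linear(hidden + context_dim, num_tags)
        # Intent classifier: [CLS] embedding (+ context embedding) -> binary label per act.
        self.classifier = nn.Linear(hidden + context_dim, num_acts)

    def forward(self, input_ids, attention_mask, context_embedding):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        tokens, sentence = out.last_hidden_state, out.pooler_output
        ctx_tok = context_embedding.unsqueeze(1).expand(-1, tokens.size(1), -1)
        tag_logits = self.tagger(torch.cat([tokens, ctx_tok], dim=-1))
        act_logits = self.classifier(torch.cat([sentence, context_embedding], dim=-1))
        return tag_logits, act_logits   # train with CrossEntropyLoss / BCEWithLogitsLoss
```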
Result Analysis: The results of the dialogue act prediction (F1 score) are shown in Table TABREF31. We further tested the performance on different intent types, as shown in Table TABREF35. In general, BERTNLU performs well with context information. The performance on cross multi-domain dialogues (CM and CM+T) drops slightly, which may be due to the decrease of "General" intent and the increase of "NoOffer" as well as "Select" intent in the dialogue data. We also noted that the F1 score of "Select" intent is remarkably lower than those of other types, but context information can improve the performance significantly. Since recognizing domain transition is a key factor for a cross-domain dialogue system, natural language understanding models need to utilize context information more effectively.
<<</Natural Language Understanding>>>
<<<Dialogue State Tracking>>>
Task: Dialogue state tracking is responsible for recognizing user goals from the dialogue context and then encoding the goals into the pre-defined system state. Traditional state tracking models take as input user dialogue acts parsed by natural language understanding modules, while recently there are joint models obtaining the system state directly from the context.
Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, if one of the user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the “fee” slot in the attraction domain will be filled with “free”. TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section SECREF18, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is beyond the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.
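A minimal sketch of a rule-based update of this kind is shown below; the state schema (a nested dict of domain to slot to value) is illustrative and only the Inform rule from the example is implemented.

```python
from copy import deepcopy

def rule_dst_update(previous_state, user_dialogue_acts):
    """Fill slots in the system state from the last user dialogue acts."""
    state = deepcopy(previous_state)
    for intent, domain, slot, value in user_dialogue_acts:
        if intent == "Inform" and domain in state and slot in state[domain]:
            state[domain][slot] = value
    return state

state = {"Attraction": {"fee": "", "name": ""}, "Hotel": {"rating": "", "name": ""}}
acts = [("Inform", "Attraction", "fee", "free"), ("Request", "Attraction", "name", "none")]
print(rule_dst_update(state, acts))
# {'Attraction': {'fee': 'free', 'name': ''}, 'Hotel': {'rating': '', 'name': ''}}
```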
Result Analysis: We evaluated the joint state accuracy (percentage of exact matching) of these two models (Table TABREF31). TRADE, the state-of-the-art model on MultiWOZ, performs poorly on our dataset, indicating that more powerful state trackers are necessary. At the test stage, RuleDST can access the previous gold system state and user dialogue acts, which leads to higher joint state accuracy than TRADE. Both models perform worse on cross multi-domain dialogues (CM and CM+T). To evaluate the ability of modeling cross-domain transition, we further calculated joint state accuracy for those turns that receive "Select" intent from users (e.g., "Find a hotel near the attraction"). The performances are 11.6% and 12.0% for RuleDST and TRADE respectively, showing that they are not able to track domain transition well.
<<</Dialogue State Tracking>>>
<<<Dialogue Policy Learning>>>
Task: Dialogue policy receives state $s$ and outputs system action $a$ at each turn. Compared with the state given by a dialogue state tracker, $s$ may have more information, such as the last user dialogue acts and the entities provided by the backend database.
Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state $s$ consists of the last system dialogue acts, last user dialogue acts, system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action $a$ is delexicalized dialogue acts of current turn which ignores the exact values of the slots, where the values will be filled back after prediction.
Result Analysis: As illustrated in Table TABREF31, there is a large gap between F1 score of exact dialogue act and F1 score of delexicalized dialogue act, which means we need a powerful system state tracker to find correct entities. The result also shows that cross multi-domain dialogues (CM and CM+T) are harder for system dialogue act prediction. Additionally, when there is "Select" intent in preceding user dialogue acts, the F1 score of exact dialogue act and delexicalized dialogue act are 41.53% and 54.39% respectively. This shows that the policy performs poorly for cross-domain transition.
<<</Dialogue Policy Learning>>>
<<<Natural Language Generation>>>
Task: Natural language generation transforms a structured dialogue act into a natural language sentence. It usually takes delexicalized dialogue acts as input and generates a template-style sentence that contains placeholders for slots. Then, the placeholders will be replaced by the exact values, which is called lexicalization.
Model: We provided a template-based model (named TemplateNLG) and SC-LSTM (Semantically Conditioned LSTM) BIBREF1 for natural language generation. For TemplateNLG, we extracted templates from the training set and manually added some templates for infrequent dialogue acts. For SC-LSTM we adapted the implementation on MultiWOZ and trained two SC-LSTM with system-side and user-side utterances respectively.
Result Analysis: We calculated corpus-level BLEU as used by BIBREF1. We took all utterances with the same delexicalized dialogue acts as references (100 references on average), which results in high BLEU scores. For user-side utterances, the BLEU score for TemplateNLG is 0.5780, while the BLEU score for SC-LSTM is 0.7858. For system-side utterances, the two scores are 0.6828 and 0.8595. As exemplified in Table TABREF39, the gap between the two models can be attributed to the fact that SC-LSTM generates common patterns while TemplateNLG retrieves the original sentences, which contain more specific information. We do not provide BLEU scores for different goal types (namely, S, M, CM, etc.) because BLEU scores on different corpora are not comparable.
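The corpus-level BLEU computation with multiple references per generated utterance can be reproduced with NLTK as sketched below; the utterances and the grouping by delexicalized dialogue act are toy examples, so the resulting score is not comparable to the numbers reported above.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# For each hypothesis, all reference utterances sharing its delexicalized dialogue act.
references = [
    [["the", "hotel", "is", "in", "[area]"], ["it", "is", "located", "in", "[area]"]],
    [["the", "entrance", "fee", "is", "[fee]"], ["tickets", "cost", "[fee]"]],
]
hypotheses = [
    ["the", "hotel", "is", "located", "in", "[area]"],
    ["the", "entrance", "fee", "is", "[fee]"],
]

score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(f"corpus BLEU: {score:.4f}")
```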
<<</Natural Language Generation>>>
<<<User Simulator>>>
Task: A user simulator imitates the behavior of users, which is useful for dialogue policy learning and automatic evaluation. A user simulator at dialogue act level (e.g., the "Usr Policy" in Figure FIGREF32) receives the system dialogue acts and outputs user dialogue acts, while a user simulator at natural language level (e.g., the left part in Figure FIGREF32) directly takes system's utterance as input and outputs user's utterance.
Model: We built a rule-based user simulator that works at the dialogue act level. Different from the agenda-based user simulator BIBREF24 that maintains a stack-like agenda, our simulator maintains the user state straightforwardly (Section SECREF17). The simulator generates a user goal as described in Section SECREF14. At each user turn, the simulator receives system dialogue acts, modifies its state, and outputs user dialogue acts according to some hand-crafted rules. For example, if the system informs the simulator that the attraction is free, then the simulator will fill the “fee” slot in the user state with “free”, and ask for the next empty slot such as “address”. The simulator terminates when all requestable slots are filled, and all cross-domain informable slots are filled by real values.
Result Analysis: During the evaluation, we initialized the user state of the simulator using the previous gold user state. The input to the simulator is the gold system dialogue acts. We used joint state accuracy (percentage of exact matching) to evaluate user state prediction and F1 score to evaluate the prediction of user dialogue acts. The results are presented in Table TABREF31. We can observe that the performance on complex dialogues (CM and CM+T) is remarkably lower than that on simple ones (S, M, and M+T). This simple rule-based simulator is provided to facilitate dialogue policy learning and automatic evaluation, and our corpus supports the development of more elaborated simulators as we provide the annotation of user-side dialogue states and dialogue acts.
<<</User Simulator>>>
<<<Evaluation with User Simulation>>>
In addition to corpus-based evaluation for each module, we also evaluated the performance of a whole dialogue system using the user simulator as described above. Three configurations were explored:
Simulation at dialogue act level. As shown by the dashed connections in Figure FIGREF32, we used the aforementioned simulator at the user side and assembled the dialogue system with RuleDST and SL policy.
Simulation at natural language level using TemplateNLG. As shown by the solid connections in Figure FIGREF32, the simulator and the dialogue system were equipped with BERTNLU and TemplateNLG additionally.
Simulation at natural language level using SC-LSTM. TemplateNLG was replaced with SC-LSTM in the second configuration.
When all the slots in a user goal are filled by real values, the simulator terminates. This is regarded as “task finish”. It is worth noting that “task finish” does not mean the task is successful, because the system may provide wrong information. We calculated the “task finish rate” on 1000 simulations for each goal type (see Table TABREF31). Findings are summarized below:
Cross multi-domain tasks (CM and CM+T) are much harder to finish. Comparing M and M+T, although each module performs well in traffic domains, additional sub-goals in these domains are still difficult to accomplish.
The system-level performance is largely limited by RuleDST and SL policy. Although the corpus-based performance of NLU and NLG modules is high, the two modules still harm the performance. Thus more powerful models are needed for all components of a pipelined dialogue system.
TemplateNLG has a much lower BLEU score but performs better than SC-LSTM in natural language level simulation. This may be attributed to that BERTNLU prefers templates retrieved from the training set.
<<</Evaluation with User Simulation>>>
<<</Benchmark and Analysis>>>
<<<Conclusion>>>
In this paper, we present the first large-scale Chinese Cross-Domain task-oriented dialogue dataset, CrossWOZ. It contains 6K dialogues and 102K utterances for 5 domains, with the annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals, which encourage natural transition between related domains. Thanks to the rich annotation of dialogue states and dialogue acts at both user side and system side, this corpus provides a new testbed for a wide range of tasks to investigate cross-domain dialogue modeling, such as dialogue state tracking, policy learning, etc. Our experiments show that the cross-domain constraints are challenging for all these tasks. The transition between related domains is especially challenging to model. Besides corpus-based component-wise evaluation, we also performed system-level evaluation with a user simulator, which requires more powerful models for all components of a pipelined cross-domain dialogue system.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nData Collection\nDatabase Construction\nGoal Generation\nDialogue Collection\nUser Side\nWizard Side\nDialogue Annotation\nStatistics\nCorpus Features\nBenchmark and Analysis\nNatural Language Understanding\nDialogue State Tracking\nDialogue Policy Learning\nNatural Language Generation\nUser Simulator\nEvaluation with User Simulation\nConclusion"
],
"type": "outline"
}
|
1909.02764
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning
<<<Abstract>>>
The recognition of emotions by humans is a complex process which considers multiple interacting signals such as facial expressions and both prosody and semantic content of utterances. Commonly, research on automatic recognition of emotions is, with few exceptions, limited to one modality. We describe an in-car experiment for emotion recognition from speech interactions for three modalities: the audio signal of a spoken interaction, the visual signal of the driver's face, and the manually transcribed content of utterances of the driver. We use off-the-shelf tools for emotion detection in audio and face and compare them to a neural transfer learning approach for emotion recognition from text which utilizes existing resources from other domains. We see that transfer learning enables models based on out-of-domain corpora to perform well. This method contributes up to 10 percentage points in F1, with up to 76 micro-average F1 across the emotions joy, annoyance and insecurity. Our findings also indicate that off-the-shelf tools analyzing face and audio are not yet ready for emotion detection in in-car speech interactions without further adjustments.
<<</Abstract>>>
<<<Introduction>>>
Automatic emotion recognition is commonly understood as the task of assigning an emotion to a predefined instance, for example an utterance (as audio signal), an image (for instance with a depicted face), or a textual unit (e.g., a transcribed utterance, a sentence, or a Tweet). The set of emotions is often following the original definition by Ekman Ekman1992, which includes anger, fear, disgust, sadness, joy, and surprise, or the extension by Plutchik Plutchik1980 who adds trust and anticipation.
Most work in emotion detection is limited to one modality. Exceptions include Busso2004 and Sebe2005, who investigate multimodal approaches combining speech with facial information. Emotion recognition in speech can utilize semantic features as well BIBREF0. Note that the term “multimodal” is also used beyond the combination of vision, audio, and text. For example, Soleymani2012 use it to refer to the combination of electroencephalogram, pupillary response and gaze distance.
In this paper, we deal with the specific situation of car environments as a testbed for multimodal emotion recognition. This is an interesting environment since it is, to some degree, a controlled environment: Dialogue partners are limited in movement, the degrees of freedom for occurring events are limited, and several sensors which are useful for emotion recognition are already integrated in this setting. More specifically, we focus on emotion recognition from speech events in a dialogue with a human partner and with an intelligent agent.
Also from the application point of view, the domain is a relevant choice: Past research has shown that emotional intelligence is beneficial for human computer interaction. Properly processing emotions in interactions increases the engagement of users and can improve performance when a specific task is to be fulfilled BIBREF1, BIBREF2, BIBREF3, BIBREF4. This is mostly based on the aspect that machines communicating with humans appear to be more trustworthy when they show empathy and are perceived as being natural BIBREF3, BIBREF5, BIBREF4.
Virtual agents play an increasingly important role in the automotive context and the speech modality is increasingly being used in cars due to its potential to limit distraction. It has been shown that adapting the in-car speech interaction system according to the drivers' emotional state can help to enhance security, performance as well as the overall driving experience BIBREF6, BIBREF7.
With this paper, we investigate how each of the three considered modalities, namely facial expressions, utterances of a driver as an audio signal, and transcribed text, contributes to the task of emotion recognition in in-car speech interactions. We focus on the five emotions of joy, insecurity, annoyance, relaxation, and boredom, since terms corresponding to so-called fundamental emotions like fear have been shown to be associated with emotional states too strong to be appropriate for the in-car context BIBREF8. Our first contribution is the description of the experimental setup for our data collection. Aiming to provoke specific emotions with situations which can occur in real-world driving scenarios and to induce speech interactions, the study was conducted in a driving simulator. Based on the collected data, we provide baseline predictions with off-the-shelf tools for face and speech emotion recognition and compare them to a neural network-based approach for emotion recognition from text. Our second contribution is the introduction of transfer learning to adapt models trained on established out-of-domain corpora to our use case. We work on German-language data; therefore, the transfer consists of a domain and a language transfer.
<<</Introduction>>>
<<<Related Work>>>
<<<Facial Expressions>>>
A common approach to encode emotions for facial expressions is the facial action coding system FACS BIBREF9, BIBREF10, BIBREF11. As the reliability and reproducibility of findings with this method have been critically discussed BIBREF12, the trend has increasingly shifted to perform the recognition directly on images and videos, especially with deep learning. For instance, jung2015joint developed a model which considers temporal geometry features and temporal appearance features from image sequences. kim2016hierarchical propose an ensemble of convolutional neural networks which outperforms isolated networks.
In the automotive domain, FACS is still popular. Ma2017 use support vector machines to distinguish happy, bothered, confused, and concentrated based on data from a natural driving environment. They found that bothered and confused are difficult to distinguish, while happy and concentrated are well identified. Aiming to reduce computational cost, Tews2011 apply a simple feature extraction using four dots in the face defining three facial areas. They analyze the variance of the three facial areas for the recognition of happy, anger and neutral. Ihme2018 aim at detecting frustration in a simulator environment. They induce the emotion with specific scenarios and a demanding secondary task and are able to associate specific face movements according to FACS. Paschero2012 use OpenCV (https://opencv.org/) to detect the eyes and the mouth region and track facial movements. They simulate different lighting conditions and apply a multilayer perceptron for the classification task of Ekman's set of fundamental emotions.
Overall, we found that studies using facial features usually focus on continuous driver monitoring, often in driver-only scenarios. In contrast, our work investigates the potential of emotion recognition during speech interactions.
<<</Facial Expressions>>>
<<<Acoustic>>>
Past research on emotion recognition from acoustics mainly concentrates on either feature selection or the development of appropriate classifiers. rao2013emotion as well as ververidis2004automatic compare local and global features in support vector machines. Next to such discriminative approaches, hidden Markov models are well-studied, however, there is no agreement on which feature-based classifier is most suitable BIBREF13. Similar to the facial expression modality, recent efforts on applying deep learning have been increased for acoustic speech processing. For instance, lee2015high use a recurrent neural network and palaz2015analysis apply a convolutional neural network to the raw speech signal. Neumann2017 as well as Trigeorgis2016 analyze the importance of features in the context of deep learning-based emotion recognition.
In the automotive sector, Boril2011 approach the detection of negative emotional states within interactions between driver and co-driver as well as in calls of the driver towards the automated spoken dialogue system. Using real-world driving data, they find that the combination of acoustic features and their respective Gaussian mixture model scores performs best. Schuller2006 collect 2,000 dialogue turns directed towards an automotive user interface and investigate the classification of anger, confusion, and neutral. They show that automatic feature generation and feature selection boost the performance of an SVM-based classifier. Further, they analyze the performance under systematically added noise and develop methods to mitigate negative effects. For more details, we refer the reader to the survey by Schuller2018. In this work, we explore the straight-forward application of domain-independent software to an in-car scenario without domain-specific adaptations.
<<</Acoustic>>>
<<<Text>>>
Previous work on emotion analysis in natural language processing focuses either on resource creation or on emotion classification for a specific task and domain. On the side of resource creation, the early and influential work of Pennebaker2015 is a dictionary of words associated with different psychologically relevant categories, including a subset of emotions. Another popular resource is the NRC dictionary by Mohammad2012b. It contains more than 10,000 words for a set of discrete emotion classes. Other resources include WordNet Affect BIBREF14 which distinguishes particular word classes. Further, annotated corpora have been created for a set of different domains, for instance fairy tales BIBREF15, blogs BIBREF16, Twitter BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, Facebook BIBREF22, news headlines BIBREF23, dialogues BIBREF24, literature BIBREF25, or self reports on emotion events BIBREF26 (see BIBREF27 for an overview).
To automatically assign emotions to textual units, the application of dictionaries has been a popular approach and still is, particularly in domains without annotated corpora. Another approach to overcome the lack of huge amounts of annotated training data in a particular domain or for a specific topic is to exploit distant supervision: use the signal of occurrences of emoticons or specific hashtags or words to automatically label the data. This is sometimes referred to as self-labeling BIBREF21, BIBREF28, BIBREF29, BIBREF30.
A variety of classification approaches have been tested, including SNoW BIBREF15, support vector machines BIBREF16, maximum entropy classification, long short-term memory networks, and convolutional neural network models BIBREF18. More recently, the state of the art is the use of transfer learning from noisy annotations to more specific predictions BIBREF29. Still, it has been shown that transferring from one domain to another is challenging, as the way emotions are expressed varies between areas BIBREF27. The approach by Felbo2017 differs from our work in that they use a huge noisy data set for pretraining the model, while we use small high-quality data sets instead.
Recently, the state of the art has also been pushed forward with a set of shared tasks, in which the participants with top results mostly exploit deep learning methods for prediction based on pretrained structures like embeddings or language models BIBREF21, BIBREF31, BIBREF20.
Our work follows this approach and builds upon embeddings with deep learning. Furthermore, we approach the application and adaptation of text-based classifiers to the automotive domain with transfer learning.
<<</Text>>>
<<</Related Work>>>
<<<Data set Collection>>>
The first contribution of this paper is the construction of the AMMER data set which we describe in the following. We focus on the drivers' interactions with both a virtual agent as well as a co-driver. To collect the data in a safe and controlled environment and to be able to consider a variety of predefined driving situations, the study was conducted in a driving simulator.
<<<Study Setup and Design>>>
The study environment consists of a fixed-base driving simulator running Vires's VTD (Virtual Test Drive, v2.2.0) simulation software (https://vires.com/vtd-vires-virtual-test-drive/). The vehicle has an automatic transmission, a steering wheel and gas and brake pedals. We collect data from video, speech and biosignals (Empatica E4 to record heart rate, electrodermal activity, skin temperature, not further used in this paper) and questionnaires. Two RGB cameras are fixed in the vehicle to capture the driver's face, one at the sun shield above the driver's seat and one in the middle of the dashboard. A microphone is placed on the center console. One experimenter sits next to the driver, the other behind the simulator. The virtual agent accompanying the drive is realized as a Wizard-of-Oz prototype which enables the experimenter to manually trigger prerecorded voice samples playing through the in-car speakers and to bring new content to the center screen. Figure FIGREF4 shows the driving simulator.
The experimental setting is comparable to an everyday driving task. Participants are told that the goal of the study is to evaluate and to improve an intelligent driving assistant. To increase the probability of emotions to arise, participants are instructed to reach the destination of the route as fast as possible while following traffic rules and speed limits. They are informed that the time needed for the task would be compared to other participants. The route comprises highways, rural roads, and city streets. A navigation system with voice commands and information on the screen keeps the participants on the predefined track.
To trigger emotion changes in the participant, we use the following events: (i) a car on the right lane cutting off to the left lane when participants try to overtake followed by trucks blocking both lanes with a slow overtaking maneuver (ii) a skateboarder who appears unexpectedly on the street and (iii) participants are praised for reaching the destination unexpectedly quickly in comparison to previous participants.
Based on these events, we trigger three interactions (Table TABREF6 provides examples) with the intelligent agent (Driver-Agent Interactions, D–A). Pretending to be aware of the current situation, e. g., to recognize unusual driving behavior such as strong braking, the agent asks the driver to explain his subjective perception of these events in detail. Additionally, we trigger two more interactions with the intelligent agent at the beginning and at the end of the drive, where participants are asked to describe their mood and thoughts regarding the (upcoming) drive. This results in five interactions between the driver and the virtual agent.
Furthermore, the co-driver asks three different questions during sessions with light traffic and low cognitive demand (Driver-Co-Driver Interactions, D–Co). These questions are more general and non-traffic-related and aim at triggering the participants' memory and fantasy. Participants are asked to describe their last vacation, their dream house and their idea of the perfect job. In sum, there are eight interactions per participant (5 D–A, 3 D–Co).
<<</Study Setup and Design>>>
<<<Procedure>>>
At the beginning of the study, participants were welcomed and the upcoming study procedure was explained. Subsequently, participants signed a consent form and completed a questionnaire to provide demographic information. After that, the co-driving experimenter started with the instruction in the simulator, which was followed by a familiarization drive consisting of highway and city driving and covering different driving maneuvers such as tight corners, lane changing and strong braking. Subsequently, participants started with the main driving task. The drive had a duration of 20 minutes containing the eight previously mentioned speech interactions. After the completion of the drive, the actual goal of improving automatic emotion recognition was revealed and a standard emotional intelligence questionnaire, namely the TEIQue-SF BIBREF32, was handed to the participants. Finally, a retrospective interview was conducted, in which participants were played recordings of their in-car interactions and asked to give discrete (annoyance, insecurity, joy, relaxation, boredom, none, following BIBREF8) as well as dimensional (valence, arousal, dominance BIBREF33 on an 11-point scale) emotion ratings for the interactions and the corresponding situations. We only use the discrete class annotations in this paper.
<<</Procedure>>>
<<<Data Analysis>>>
Overall, 36 participants aged 18 to 64 years ($\mu $=28.89, $\sigma $=12.58) completed the experiment. This leads to 288 interactions, 180 between driver and the agent and 108 between driver and co-driver. The emotion self-ratings from the participants yielded 90 utterances labeled with joy, 26 with annoyance, 49 with insecurity, 9 with boredom, 111 with relaxation and 3 with no emotion. One example interaction per interaction type and emotion is shown in Table TABREF7. For further experiments, we only use joy, annoyance/anger, and insecurity/fear due to the small sample size for boredom and no emotion and under the assumption that relaxation brings little expressivity.
<<</Data Analysis>>>
<<</Data set Collection>>>
<<<Methods>>>
<<<Emotion Recognition from Facial Expressions>>>
We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored.
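As a minimal illustration of this aggregation step, the following sketch (our own, not part of the vendor tool) averages the per-frame scores and picks the label with the maximal mean; the array layout and the column order are assumptions made for illustration.

import numpy as np

# frame_scores: one row per frame with a detected face, columns = [joy, anger, fear],
# each score in [0, 100] as delivered by the off-the-shelf tool (assumed layout).
LABELS = ["joy", "annoyance", "insecurity"]  # anger -> annoyance, fear -> insecurity

def classify_video(frame_scores: np.ndarray) -> str:
    """Average each emotion over all frames and return the label with the maximal mean score."""
    mean_scores = frame_scores.mean(axis=0)
    return LABELS[int(mean_scores.argmax())]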
<<</Emotion Recognition from Facial Expressions>>>
<<<Emotion Recognition from Audio Signal>>>
We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance. We consider the outputs for the states of joy, anger, and fear, mapping analogously to our classes as for facial expressions. Low-confidence predictions are interpreted as “no emotion”. We accept the emotion with the highest score as the discrete prediction otherwise.
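A comparable sketch for the utterance-level audio decision; the dictionary output format and the confidence threshold are assumptions, since the tool's interface cannot be disclosed.

CLASS_MAP = {"joy": "joy", "anger": "annoyance", "fear": "insecurity"}

def classify_utterance(scores: dict, threshold: float = 0.1) -> str:
    """Keep only the three target emotions; low-confidence output is mapped to 'no emotion'."""
    relevant = {CLASS_MAP[name]: value for name, value in scores.items() if name in CLASS_MAP}
    if not relevant:
        return "no emotion"
    label, best = max(relevant.items(), key=lambda item: item[1])
    return label if best >= threshold else "no emotion"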
<<</Emotion Recognition from Audio Signal>>>
<<<Emotion Recognition from Transcribed Utterances>>>
For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a softmax output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (these parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model.
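A minimal Keras/TensorFlow sketch of this baseline is given below; the hidden sizes (128 LSTM units, two dense layers of 64 units) are illustrative assumptions, as the paper only fixes the dropout rate and the optimizer settings.

import numpy as np
import tensorflow as tf

def build_baseline_model(embedding_matrix: np.ndarray, n_classes: int = 3) -> tf.keras.Model:
    """Frozen embedding layer, bidirectional LSTM, two dense layers and a softmax output."""
    vocab_size, emb_dim = embedding_matrix.shape
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(
            vocab_size, emb_dim,
            embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
            trainable=False),  # frozen, pre-trained embeddings
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, dropout=0.3), name="bilstm"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model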
We train models on a variety of corpora, namely the common format published by BIBREF27 of the FigureEight (formerly known as Crowdflower) data set of social media, the ISEAR data BIBREF40 (self-reported emotional events), and the Twitter Emotion Corpus (TEC, weakly annotated Tweets with #anger, #disgust, #fear, #happy, #sadness, and #surprise, Mohammad2012). From all corpora, we use instances with the labels fear, anger, or joy. These corpora are English, but we make predictions on German utterances. Therefore, each corpus is automatically translated to German with Google Translate. We remove URLs, user tags (“@Username”), punctuation and hash signs. The distributions of the data sets are shown in Table TABREF12.
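A sketch of this preprocessing step is shown below; the exact regular expressions are our own assumptions, as the paper only states which elements are removed.

import re

def clean_utterance(text: str) -> str:
    text = re.sub(r"https?://\S+", " ", text)  # URLs
    text = re.sub(r"@\w+", " ", text)          # user tags ("@Username")
    text = re.sub(r"#", " ", text)             # hash signs (the word itself is kept)
    text = re.sub(r"[^\w\s]", " ", text)       # remaining punctuation
    return re.sub(r"\s+", " ", text).strip()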
To adapt models trained on these data, we apply transfer learning as follows: the model is first trained until convergence on one out-of-domain corpus (only on the classes fear, joy, and anger for compatibility reasons). Then, the parameters of the bi-LSTM layer are frozen and the remaining layers are further trained on AMMER. This procedure is illustrated in Figure FIGREF13.
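The transfer step itself can be sketched as follows, reusing the build_baseline_model helper from the sketch above; embedding_matrix, the corpus variables (x_tec, y_tec, x_ammer_train, y_ammer_train) and the epoch and batch-size settings are placeholders for illustration, not values reported in the paper.

import tensorflow as tf

# Stage 1: train until convergence on one out-of-domain corpus (classes fear, anger, joy only).
model = build_baseline_model(embedding_matrix)
model.fit(x_tec, y_tec, epochs=50, batch_size=32)  # e.g. the translated TEC corpus

# Stage 2: freeze the bi-LSTM layer and further train the remaining layers on AMMER.
model.get_layer("bilstm").trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])  # recompile after freezing
model.fit(x_ammer_train, y_ammer_train, epochs=20, batch_size=8)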
<<</Emotion Recognition from Transcribed Utterances>>>
<<</Methods>>>
<<<Results>>>
<<<Facial Expressions and Audio>>>
Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=0.31, R=0.35). While the classification results for joy are promising (R=43 %, P=57 %), the distinction of insecurity and annoyance from the other classes appears to be more challenging.
Regarding the audio signal, we observe a macro $\text{F}_1$ score of 29 % (P=42 %, R=22 %). There is a bias towards negative emotions, which results in a small number of detected joy predictions (R=4 %). Insecurity and annoyance are frequently confused.
<<</Facial Expressions and Audio>>>
<<<Text from Transcribed Utterances>>>
The experimental setting for the evaluation of emotion recognition from text is as follows: We evaluate the BiLSTM model in three different experiments: (1) in-domain, (2) out-of-domain and (3) transfer learning. For all experiments we train on the classes anger/annoyance, fear/insecurity and joy. Table TABREF19 shows all results for the comparison of these experimental settings.
<<<Experiment 1: In-Domain application>>>
We first set a baseline by validating our models on established corpora. We train the baseline model on 60 % of each data set listed in Table TABREF12 and evaluate that model with 40 % of the data from the same domain (results shown in the column “In-Domain” in Table TABREF19). Excluding AMMER, we achieve an average micro $\text{F}_1$ of 68 %, with best results of F$_1$=73 % on TEC. The model trained on our AMMER corpus achieves F$_1$=57 %. This is most probably due to the small size of this data set and the class bias towards joy, which makes up more than half of the data set. These results are mostly in line with Bostan2018.
<<</Experiment 1: In-Domain application>>>
<<<Experiment 2: Simple Out-Of-Domain application>>>
Now we analyze how well the models trained in Experiment 1 perform when applied to our data set. The results are shown in column “Simple” in Table TABREF19. We observe a clear drop in performance, with an average of F$_1$=48 %. The best performing model is again the one trained on TEC, on par with the one trained on the Figure8 data. While the model trained on ISEAR performs second best in Experiment 1, it performs worst in Experiment 2.
<<</Experiment 2: Simple Out-Of-Domain application>>>
<<<Experiment 3: Transfer Learning application>>>
To adapt models trained on previously existing data sets to our particular application, the AMMER corpus, we apply transfer learning. Here, we perform leave-one-out cross-validation. As pre-trained models we use each model from Experiment 1 and further optimize with the training subset of each cross-validation iteration of AMMER. The results are shown in the column “Transfer L.” in Table TABREF19. The confusion matrix is also depicted in Table TABREF16.
With this procedure we achieve an average performance of F$_1$=75 %, which is better than the results from the in-domain Experiment 1. The best performance of F$_1$=76 % is achieved with the models pre-trained on each of the data sets except ISEAR. All transfer learning models clearly outperform their simple out-of-domain counterparts.
To ensure that this performance increase is not only due to the larger data set, we compare these results to training the model without transfer on a joint corpus consisting of each corpus together with AMMER (again, with leave-one-out cross-validation). These results are depicted in column “Joint C.”. Thus, both settings, “transfer learning” and “joint corpus”, have access to the same information.
The results show an increase in performance in contrast to not using AMMER for training; however, the transfer approach based on partially retraining the model shows a clear improvement for all models (by 7pp for Figure8, 10pp for EmoInt, 8pp for TEC, 13pp for ISEAR) compared to the “Joint” setup.
<<</Experiment 3: Transfer Learning application>>>
<<</Text from Transcribed Utterances>>>
<<</Results>>>
<<<Summary & Future Work>>>
We described the creation of the multimodal AMMER data with emotional speech interactions between a driver and both a virtual agent and a co-driver. We analyzed the modalities of facial expressions, acoustics, and transcribed utterances regarding their potential for emotion recognition during in-car speech interactions. We applied off-the-shelf emotion recognition tools for facial expressions and acoustics. For transcribed text, we developed a neural network-based classifier with transfer learning exploiting existing annotated corpora. We find that analyzing transcribed utterances is most promising for classification of the three emotional states of joy, annoyance and insecurity.
Our results for facial expressions indicate that there is potential for the classification of joy, however, the states of annoyance and insecurity are not well recognized. Future work needs to investigate more sophisticated approaches to map frame predictions to sequence predictions. Furthermore, movements of the mouth region during speech interactions might negatively influence the classification from facial expressions. Therefore, the question remains how facial expressions can best contribute to multimodal detection in speech interactions.
Regarding the classification from the acoustic signal, the application of off-the-shelf classifiers without further adjustments seems to be challenging. We find a strong bias towards negative emotional states for our experimental setting. For instance, the personalization of the recognition algorithm (e. g., mean and standard deviation normalization) could help to adapt the classification for specific speakers and thus to reduce this bias. Further, the acoustic environment in the vehicle interior has special properties and the recognition software might need further adaptations.
Our transfer learning-based text classifier shows considerably better results. This is a substantial result in its own right, as, to the best of our knowledge, only one previous method for transfer learning in emotion recognition has been proposed, in which a sentiment/emotion-specific source of labels is used for pre-training BIBREF29. Other applications of transfer learning from general language models include BIBREF41, BIBREF42. Our approach is substantially different, not being trained on a huge amount of noisy data, but on smaller out-of-domain sets of higher quality. This result suggests that emotion classification systems which work across domains can be developed with reasonable effort.
For a productive application of emotion detection in the context of speech events we conclude that a deployed system might perform best with a speech-to-text module followed by an analysis of the text. Further, in this work, we did not explore an ensemble model or the interaction of different modalities. Thus, future work should investigate the fusion of multiple modalities in a single classifier.
<<</Summary & Future Work>>>
<<<Acknowledgment>>>
We thank Laura-Ana-Maria Bostan for discussions and data set preparations. This research has partially been funded by the German Research Council (DFG), project SEAT (KL 2869/1-1).
<<</Acknowledgment>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nFacial Expressions\nAcoustic\nText\nData set Collection\nStudy Setup and Design\nProcedure\nData Analysis\nMethods\nEmotion Recognition from Facial Expressions\nEmotion Recognition from Audio Signal\nEmotion Recognition from Transcribed Utterances\nResults\nFacial Expressions and Audio\nText from Transcribed Utterances\nExperiment 1: In-Domain application\nExperiment 2: Simple Out-Of-Domain application\nExperiment 3: Transfer Learning application\nSummary & Future Work\nAcknowledgment"
],
"type": "outline"
}
|
1912.01252
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Facilitating on-line opinion dynamics by mining expressions of causation. The case of climate change debates on The Guardian
<<<Abstract>>>
News website comment sections are spaces where potentially conflicting opinions and beliefs are voiced. Addressing questions of how to study such cultural and societal conflicts through technological means, the present article critically examines possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics. These investigations are guided by a discussion of an experimental observatory for mining and analyzing opinions from climate change-related user comments on news articles from TheGuardian.com. This observatory combines causal mapping methods with computational text analysis in order to mine beliefs and visualize opinion landscapes based on expressions of causation. By (1) introducing digital methods and open infrastructures for data exploration and analysis and (2) engaging in debates about the implications of such methods and infrastructures, notably in terms of the leap from opinion observation to debate facilitation, the article aims to make a practical and theoretical contribution to the study of opinion dynamics and conflict in new media environments.
<<</Abstract>>>
<<<Introduction>>>
<<<Background>>>
Over the past two decades, the rise of social media and the digitization of news and discussion platforms have radically transformed how individuals and groups create, process and share news and information. As Alan Rusbridger, former editor-in-chief of the newspaper The Guardian, has it, these technologically-driven shifts in the ways people communicate, organize themselves and express their beliefs and opinions, have
empower[ed] those that were never heard, creating a new form of politics and turning traditional news corporations inside out. It is impossible to think of Donald Trump; of Brexit; of Bernie Sanders; of Podemos; of the growth of the far right in Europe; of the spasms of hope and violent despair in the Middle East and North Africa without thinking also of the total inversion of how news is created, shared and distributed. Much of it is liberating and inspiring. Some of it is ugly and dark. And something - the centuries-old craft of journalism - is in danger of being lost BIBREF0.
Rusbridger's observation that the present media-ecology puts traditional notions of politics, journalism, trust and truth at stake is a widely shared one BIBREF1, BIBREF2, BIBREF3. As such, it has sparked interdisciplinary investigations, diagnoses and ideas for remedies across the economical, socio-political, and technological spectrum, challenging our existing assumptions and epistemologies BIBREF4, BIBREF5. Among these lines of inquiry, particular strands of research from the computational social sciences are addressing pressing questions of how emerging technologies and digital methods might be operationalized to regain a grip on the dynamics that govern the flow of on-line news and its associated multitudes of voices, opinions and conflicts. Could the information circulating on on-line (social) news platforms for instance be mined to better understand and analyze the problems facing our contemporary society? Might such data mining and analysis help us to monitor the growing number of social conflicts and crises due to cultural differences and diverging world-views? And finally, would such an approach potentially facilitate early detection of conflicts and even ways to resolve them before they turn violent?
Answering these questions requires further advances in the study of cultural conflict based on digital media data. This includes the development of fine-grained representations of cultural conflict based on theoretically-informed text analysis, the integration of game-theoretical approaches to models of polarization and alignment, as well as the construction of accessible tools and media-monitoring observatories: platforms that foster insight into the complexities of social behaviour and opinion dynamics through automated computational analyses of (social) media data. Through an interdisciplinary approach, the present article aims to make both a practical and theoretical contribution to these aspects of the study of opinion dynamics and conflict in new media environments.
<<</Background>>>
<<<Objective>>>
The objective of the present article is to critically examine possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics on the basis of an experimental data analytics pipeline or observatory for mining and analyzing climate change-related user comments from the news website of The Guardian (TheGuardian.com). Combining insights from the social and political sciences with computational methods for the linguistic analysis of texts, this observatory provides a series of spatial (network) representations of the opinion landscapes on climate change on the basis of causation frames expressed in news website comments. This allows for the exploration of opinion spaces at different levels of detail and aggregation.
Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections. The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts. To this end, an approach based on the automated extraction of semantic frames expressing causation is proposed. The observatory thus builds on the theoretical premise that expressions of causation such as `global warming causes rises in sea levels' can be revelatory for a person or group's underlying belief systems. Through a further technical description of the observatory's data-analytical components, section two of the paper deals with matters of spatially modelling the output of the semantic frame extractor and how this might be achieved without sacrificing nuances of meaning. The final section of the paper, then, discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to on-line opinion facilitation. As such, the paper brings into view and critically evaluates the fundamental conceptual leap from machine-guided observation to debate facilitation and intervention.
Through the case examples from The Guardian's website and the theoretical discussions explored in these sections, the paper intends to make a twofold contribution to the fields of media studies, opinion dynamics and computational social science. Firstly, the paper introduces and chains together a number of data analytics components for social media monitoring (and facilitation) that were developed in the context of the <project name anonymized for review> infrastructure project. The <project name anonymized for review> infrastructure makes the components discussed in this paper available as open web services in order to foster reproducibility and further experimentation and development <infrastructure reference URL anonymized for review>. Secondly, and supplementing these technological and methodological gains, the paper addresses a number of theoretical, epistemological and ethical questions that are raised by experimental approaches to opinion exploration and facilitation. This notably includes methodological questions on the preservation of meaning through text and data mining, as well as the role of human interpretation, responsibility and incentivisation in observing and potentially facilitating opinion dynamics.
<<</Objective>>>
<<<Data: the communicative setting of TheGuardian.com>>>
In order to study on-line opinion dynamics and build the corresponding climate change opinion observatory discussed in this paper, a corpus of climate-change related news articles and news website comments was analyzed. Concretely, articles from the ‘climate change’ subsection from the news website of The Guardian dated from 2009 up to April 2019 were processed, along with up to 200 comments and associated metadata for articles where commenting was enabled at the time of publication. The choice for studying opinion dynamics using data from The Guardian is motivated by this news website's prominent position in the media landscape as well as its communicative setting, which is geared towards user engagement. Through this interaction with readers, the news platform embodies many of the recent shifts that characterize our present-day media ecology.
TheGuardian.com is generally acknowledged to be one of the UK's leading online newspapers, with 8.2 million unique visitors per month as of May 2013 BIBREF6. The website consists of a core news site, as well as a range of subsections that allow for further classification and navigation of articles. Articles related to climate change can for instance be accessed by navigating through the `News' section, over the subsection `environment', to the subsubsection `climate change' BIBREF7. All articles on the website can be read free of charge, as The Guardian relies on a business model that combines revenues from advertising, voluntary donations and paid subscriptions.
Apart from offering high-quality, independent journalism on a range of topics, a distinguishing characteristic of The Guardian is its penchant for reader involvement and engagement. Adapting to the changing media landscape and appropriating business models that fit the transition from print to on-line news media, The Guardian has transformed itself into a platform that enables forms of citizen journalism and blogging, and welcomes readers' comments on news articles BIBREF0. In order for a reader to comment on articles, it is required that a user account is made, which provides a user with a unique user name and a user profile page with a stable URL. According to the website's help pages, providing users with an identity that is consistently recognized by the community fosters proper on-line community behaviour BIBREF8. Registered users can post comments on content that is open to commenting, and these comments are moderated by a dedicated moderation team according to The Guardian's community standards and participation guidelines BIBREF9. In support of digital methods and innovative approaches to journalism and data mining, The Guardian has launched an open API (application programming interface) through which developers can access different types of content BIBREF10. It should be noted that at the moment of writing this article, readers' comments are not accessible through this API. For the scientific and educational purposes of this paper, comments were thus consulted using a dedicated scraper.
Taking into account this community and technologically-driven orientation, the communicative setting of The Guardian from which opinions are to be mined and the underlying belief system revealed, is defined by articles, participating commenters and comment spheres (that is, the actual comments aggregated by user, individual article or collection of articles) (see Figure FIGREF4).
In this setting, articles (and previous comments on those articles) can be commented on by participating commenters, each of whom brings to the debate his or her own opinions or belief system. What this belief system might consist of can be inferred on a number of levels, with varying degrees of precision. On the most general level, a generic description of the profile of the average reader of The Guardian can be informative. Such profiles have been compiled by market researchers with the purpose of informing advertisers about the demographic that might be reached through this news website (and other products carrying The Guardian's brand). As of the writing of this article, the audience of The Guardian is presented to advertisers as a `progressive' audience:
Living in a world of unprecedented societal change, with the public narratives around politics, gender, body image, sexuality and diet all being challenged. The Guardian is committed to reflecting the progressive agenda, and reaching the crowd that uphold those values. It’s helpful that we reach over half of progressives in the UK BIBREF11.
A second, equally high-level indicator of the beliefs that might be present on the platform, are the links through which articles on climate change can be accessed. An article on climate change might for instance be consulted through the environment section of the news website, but also through the business section. Assuming that business interests might potentially be at odds with environmental concerns, it could be hypothesized that the particular comment sphere for that article consists of at least two potentially clashing frames of mind or belief systems.
However, as will be expanded upon further in this article, truly capturing opinion dynamics requires a more systemic and fine-grained approach. The present article therefore proposes a method for harvesting opinions from the actual comment texts. The presupposition is thereby that comment spheres are marked by a diversity of potentially related opinions and beliefs. Opinions might for instance be connected through the reply structure that marks the comment section of an article, but this connection might also manifest itself on a semantic level (that is, the level of meaning or the actual contents of the comments). To capture this multidimensional, interconnected nature of the comment spheres, it is proposed to represent comment spheres as networks, where the nodes represent opinions and beliefs, and edges the relationships between these beliefs (see the spatial representation of beliefs infra). The use of precision language tools to extract such beliefs and their mutual relationships, as will be explored in the following sections, can open up new pathways of model validation and creation.
<<</Data: the communicative setting of TheGuardian.com>>>
<<</Introduction>>>
<<<Mining opinions and beliefs from texts>>>
In traditional experimental settings, survey techniques and associated statistical models provide researchers with established methods to gauge and analyze the opinions of a population. When studying opinion landscapes through on-line social media, however, harvesting beliefs from big textual data such as news website comments and developing or appropriating models for their analysis is a non-trivial task BIBREF12, BIBREF13, BIBREF14.
In the present context, two challenges related to data-gathering and text mining need to be addressed: (1) defining what constitutes an expression of an opinion or belief, and (2) associating this definition with a pattern that might be extracted from texts. Recent scholarship in the fields of natural language processing (NLP) and argumentation mining has yielded a range of instruments and methods for the (automatic) identification of argumentative claims in texts BIBREF15, BIBREF16. Adding to these instruments and methods, the present article proposes an approach in which belief systems or opinions on climate change are accessed through expressions of causation.
<<<Causal mapping methods and the climate change debate>>>
The climate change debate is often characterized by expressions of causation, that is, expressions linking a certain cause with a certain effect. Cultural or societal clashes on climate change might for instance concern diverging assessments of whether global warming is man-made or not BIBREF17. Based on such examples, it can be stated that expressions of causation are closely associated with opinions or beliefs, and that as such, these expressions can be considered a valuable indicator for the range and diversity of the opinions and beliefs that constitute the climate change debate. The observatory under discussion therefore focuses on the extraction and analysis of linguistic patterns called causation frames. As will be further demonstrated in this section, the benefit of this causation-based approach is that it offers a systemic approach to opinion dynamics that comprises different layers of meaning, notably the cognitive or social meaningfulness of patterns on account of their being expressions of causation, as well as further lexical and semantic information that might be used for analysis and comparison.
The study of expressions of causation as a method for accessing and assessing belief systems and opinions has been formalized and streamlined since the 1970s. Pioneered by political scientist Robert Axelrod and others, this causal mapping method (also referred to as `cognitive mapping') was introduced as a means of reconstructing and evaluating administrative and political decision-making processes, based on the principle that
the notion of causation is vital to the process of evaluating alternatives. Regardless of philosophical difficulties involved in the meaning of causation, people do evaluate complex policy alternatives in terms of the consequences a particular choice would cause, and ultimately of what the sum of these effects would be. Indeed, such causal analysis is built into our language, and it would be very difficult for us to think completely in other terms, even if we tried BIBREF18.
Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme (it should be noted that for this task Axelrod had human readers in mind). The graphs resulting from these efforts provide a structural overview of the relations among causal assertions (and thus beliefs):
The basic elements of the proposed system are quite simple. The concepts a person uses are represented as points, and the causal links between these concepts are represented as arrows between these points. This gives a pictorial representation of the causal assertions of a person as a graph of points and arrows. This kind of representation of assertions as a graph will be called a cognitive map. The policy alternatives, all of the various causes and effects, the goals, and the ultimate utility of the decision maker can all be thought of as concept variables, and represented as points in the cognitive map. The real power of this approach appears when a cognitive map is pictured in graph form; it is then relatively easy to see how each of the concepts and causal relationships relate to each other, and to see the overall structure of the whole set of portrayed assertions BIBREF18.
In order to construct these cognitive maps based on textual information, Margaret Tucker Wrightson provides a set of reading and coding rules for extracting cause concepts, linkages (relations) and effect concepts from expressions in the English language. The assertion `Our present topic is the militarism of Germany, which is maintaining a state of tension in the Baltic Area' might for instance be encoded as follows: `the militarism of Germany' (cause concept), /+/ (a positive relationship), `maintaining a state of tension in the Baltic area' (effect concept) BIBREF19. Emphasizing the role of human interpretation, it is acknowledged that no strict set of rules can capture the entire spectrum of causal assertions:
The fact that the English language is as varied as those who use it makes the coder's task complex and difficult. No set of rules will completely solve the problems he or she might encounter. These rules, however, provide the coder with guidelines which, if conscientiously followed, will result in outcomes meeting social scientific standards of comparative validity and reliability BIBREF19.
To facilitate the task of encoders, the causal mapping method has gone through various iterations since its original inception, all the while preserving its original premises. Recent software packages have for instance been devised to support the data encoding and drawing process BIBREF20. As such, causal or cognitive mapping has become an established opinion and decision mining method within political science, business and management, and other domains. It has notably proven to be a valuable method for the study of recent societal and cultural conflicts. Thomas Homer-Dixon et al. for instance rely on cognitive-affective maps created from survey data to analyze interpretations of the housing crisis in Germany, Israeli attitudes toward the Western Wall, and moderate versus skeptical positions on climate change BIBREF21. Similarly, Duncan Shaw et al. venture to answer the question of `Why did Brexit happen?' by building causal maps of nine televised debates that were broadcast during the four weeks leading up to the Brexit referendum BIBREF22.
In order to appropriate the method of causal mapping to the study of on-line opinion dynamics, it needs to be expanded from applications at the scale of human readers and relatively small corpora of archival documents and survey answers, to the realm of `big' textual data and larger quantities of information. This attuning of cognitive mapping methods to the large-scale processing of texts required for media monitoring necessarily involves a degree of automation, as will be explored in the next section.
<<</Causal mapping methods and the climate change debate>>>
<<<Automated causation tracking with the Penelope semantic frame extractor>>>
As outlined in the previous section, causal mapping is based on the extraction of so-called cause concepts, (causal) relations, and effect concepts from texts. The complexity of each of these concepts can range from the relatively simple (as illustrated by the easily-identifiable cause and effect relation in the example of `German militarism' cited earlier), to more complex assertions such as `The development of international cooperation in all fields across the ideological frontiers will gradually remove the hostility and fear that poison international relations', which contains two effect concepts (viz. `the hostility that poisons international relations' and `the fear that poisons international relations'). As such, this statement would have to be encoded as a double relationship BIBREF19.
The coding guidelines in BIBREF19 further reflect that extracting cause and effect concepts from texts is an operation that works on both the syntactical and semantic levels of assertions. This can be illustrated by means of the guidelines for analyzing the aforementioned causal assertion on German militarism:
1. The first step is the realization of the relationship. Does a subject affect an object? 2. Having recognized that it does, the isolation of the cause and effects concepts is the second step. As the sentence structure indicates, "the militarism of Germany" is the causal concept, because it is the initiator of the action, while the direct object clause, "a state of tension in the Baltic area," constitutes that which is somehow influenced, the effect concept BIBREF19.
In the field of computational linguistics, from which the present paper borrows part of its methods, this procedure for extracting information related to causal assertions from texts can be considered an instance of an operation called semantic frame extraction BIBREF23. A semantic frame captures a coherent part of the meaning of a sentence in a structured way. As documented in the FrameNet project BIBREF24, the Causation frame is defined as follows:
A Cause causes an Effect. Alternatively, an Actor, a participant of a (implicit) Cause, may stand in for the Cause. The entity Affected by the Causation may stand in for the overall Effect situation or event BIBREF25.
In a linguistic utterance such as a statement in a news website comment, the Causation frame can be evoked by a series of lexical units, such as `cause', `bring on', etc. In the example `If such a small earthquake CAUSES problems, just imagine a big one!', the Causation frame is triggered by the verb `causes', which therefore is called the frame evoking element. The Cause slot is filled by `a small earthquake', the Effect slot by `problems' BIBREF25.
In order to automatically mine cause and effects concepts from the corpus of comments on The Guardian, the present paper uses the Penelope semantic frame extractor: a tool that exploits the fact that semantic frames can be expressed as form-meaning mappings called constructions. Notably, frames were extracted from Guardian comments by focusing on the following lexical units (verbs, prepositions and conjunctions), listed in FrameNet as frame evoking elements of the Causation frame: Cause.v, Due to.prep, Because.c, Because of.prep, Give rise to.v, Lead to.v or Result in.v.
As illustrated by the following examples, the strings output by the semantic frame extractor adhere closely to the original utterances, preserving all of the real-world noisiness of the comments' causation frames:
The output of the semantic frame extractor as such is used as the input for the ensuing pipeline components in the climate change opinion observatory. The aim of a further analysis of these frames is to find patterns in the beliefs and opinions they express. As will be discussed in the following section, which focuses on applications and cases, maintaining semantic nuances in this further analytic process foregrounds the role of models and aggregation levels.
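For illustration only, a highly simplified, rule-based stand-in for this extraction step might look as follows (this is not the Penelope extractor, whose construction-based processing is considerably richer; the splitting heuristics below are our own assumptions).

import re

# Frame evoking elements of the Causation frame used in this study.
FORWARD = r"\b(causes?|caused|leads? to|led to|results? in|resulted in|gives? rise to|gave rise to)\b"
BACKWARD = r"\b(because of|because|due to)\b"

def extract_causation(sentence: str):
    """Return a (cause, effect) pair if a frame evoking element is found, otherwise None."""
    match = re.search(FORWARD, sentence, flags=re.IGNORECASE)
    if match:  # "X causes Y": the cause precedes the trigger, the effect follows it
        return sentence[:match.start()].strip(" ,."), sentence[match.end():].strip(" ,.")
    match = re.search(BACKWARD, sentence, flags=re.IGNORECASE)
    if match:  # "Y because X": the effect precedes the trigger, the cause follows it
        return sentence[match.end():].strip(" ,."), sentence[:match.start()].strip(" ,.")
    return None

print(extract_causation("global warming causes rises in sea levels"))
# -> ('global warming', 'rises in sea levels')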
<<</Automated causation tracking with the Penelope semantic frame extractor>>>
<<</Mining opinions and beliefs from texts>>>
<<<Analyses and applications>>>
Based on the presupposition that relations between causation frames reveal beliefs, the output of the semantic frame extractor creates various opportunities for exploring opinion landscapes and empirically validating conceptual models for opinion dynamics.
In general, any alignment of conceptual models and real-world data is an exercise in compromising, as the idealized, abstract nature of models is likely to be at odds with the messiness of the actual data. Finding such a compromise might for instance involve a reduction of the simplicity or elegance of the model, or, on the other hand, an increased aggregation (and thus reduced granularity) of the data.
Addressing this challenge, the current section reflects on questions of data modelling, aggregation and meaning by exploring, through case examples, different spatial representations of opinion landscapes mined from TheGuardian.com's comment sphere. These spatial renditions will be understood as network visualizations in which nodes represent argumentative statements (beliefs) and edges the degree of similarity between these statements. On the most general level, then, such a representation can consist of an overview of all the causes expressed in the corpus of climate change-related Guardian comments. This type of visualization provides a bird's-eye view of the entire opinion landscape as mined from the comment texts. In turn, such a general overview might elicit more fine-grained, micro-level investigations, in which a particular cause is singled out and its more specific associated effects are mapped. These macro- and micro-level overviews come with their own potential for theory building and evaluation, as well as distinct requirements for the depth or detail of meaning that needs to be represented. To get the most general sense of an opinion landscape one might for instance be more tolerant of abstract renditions of beliefs (e.g. by reducing statements to their most frequently used terms), but for more fine-grained analysis one requires more context and nuance (e.g. adhering as closely as possible to the original comment).
<<<Aggregation>>>
As follows from the above, one of the most fundamental questions when building automated tools to observe opinion dynamics that potentially aim at advising means of debate facilitation concerns the level of meaning aggregation. A clear argumentative or causal association between, for instance, climate change and catastrophic events such as floods or hurricanes may become detectable by automatic causal frame tracking at the scale of large collections of articles where this association might appear statistically more often, but detection comes with great challenges when the aim is to classify certain sets of only a few statements in more free expression environments such as comment spheres.
In other words, the problem of meaning aggregation is closely related to issues of scale and aggregation over utterances. The more fine-grained the semantic resolution is, that is, the more specific the cause or effect is that one is interested in, the less probable it is to observe the same statement twice. Moreover, with every independent variable (such as time, different commenters or user groups, etc.), less data on which fine-grained opinion statements are to be detected is available. In the present case of parsed comments from TheGuardian.com, providing insights into the belief system of individual commenters, even if all their statements are aggregated over time, relies on a relatively small set of argumentative statements. This relative sparseness is in part due to the fact that the scope of the semantic frame extractor is confined to the frame evoking elements listed earlier, thus omitting more implicit assertions of causation (i.e. expressions of causation that can only be derived from context and from reading between the lines).
Similarly, as will be explored in the ensuing paragraphs, matters of scale and aggregation determine the types of further linguistic analyses that can be performed on the output of the frame extractor. Within the field of computational linguistics, various techniques have been developed to represent the meaning of words as vectors that capture the contexts in which these words are typically used. Such analyses might reveal patterns of statistical significance, but it is also likely that in creating novel, numerical representations of the original utterances, the semantic structure of argumentatively linked beliefs is lost.
In sum, developing opinion observatories and (potential) debate facilitators entails finding a trade-off, or, in fact, a middle way between macro- and micro-level analyses. On the one hand, one needs to leverage automated analysis methods at the scale of larger collections to maximum advantage. But one also needs to integrate opportunities to interactively zoom into specific aspects of interest and provide more fine-grained information at these levels down to the actual statements. This interplay between macro- and micro-level analyses is explored in the case studies below.
<<</Aggregation>>>
<<<Spatial renditions of TheGuardian.com's opinion landscape>>>
The main purpose of the observatory under discussion is to provide insight into the belief structures that characterize the opinion landscape on climate change. For reasons outlined above, this raises questions of how to represent opinions and, correspondingly, of determining which representation is most suited as the atomic unit of comparison between opinions. In general terms, the desired outcome of further processing of the output of the semantic frame extractor is a network representation in which similar cause or effect strings are displayed in close proximity to one another. A high-level description of the pipeline under discussion thus goes as follows. In a first step, it can be decided whether one wants to map cause statements or effect statements. Next, the selected statements are grouped per commenter (i.e. a list is made of all cause statements or effect statements per commenter). These statements are filtered in order to retain only nouns, adjectives and verbs (thereby also omitting frequently occurring verbs such as ‘to be’). The remaining words are then lemmatized, that is, reduced to their dictionary forms. This output is finally translated into a network representation, whereby nodes represent (aggregated) statements, and edges express the semantic relatedness between statements (based on a set overlap whereby the number of shared lemmata is counted).
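To make the pipeline described above more concrete, the following minimal Python sketch (an illustration, not the project's actual implementation) reproduces its main steps with the spaCy and NetworkX libraries; the choice of libraries, the variable names and the simple overlap-based edge weighting are assumptions made for the sake of the example.

import itertools
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")
KEEP = {"NOUN", "ADJ", "VERB"}  # retain only nouns, adjectives and verbs

def to_lemmata(statement):
    # Filter out auxiliaries such as 'to be' and reduce the remaining words to dictionary forms.
    return {token.lemma_.lower() for token in nlp(statement)
            if token.pos_ in KEEP and not token.is_stop}

def build_opinion_network(statements):
    # Nodes are (aggregated) statements; edge weights count the lemmata shared between them.
    graph = nx.Graph()
    lemmatized = [(s, to_lemmata(s)) for s in statements]
    graph.add_nodes_from(s for s, _ in lemmatized)
    for (s1, l1), (s2, l2) in itertools.combinations(lemmatized, 2):
        shared = len(l1 & l2)
        if shared > 0:
            graph.add_edge(s1, s2, weight=shared)
    return graph  # can be exported for Gephi, e.g. with nx.write_gexf(graph, "landscape.gexf")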
As illustrated by two spatial renditions that were created using this approach and visualized using the network analysis tool Gephi BIBREF26, the labels assigned to these nodes (lemmata, full statements, or other) can be adapted to the scope of the analysis.
<<<A macro-level overview: causes addressed in the climate change debate>>>
Suppose one wants to get a first idea about the scope and diversity of an opinion landscape, without any preconceived notions of this landscape's structure or composition. One way of doing this would be to map all of the causes that are mentioned in comments related to articles on climate change, that is, creating an overview of all the causes that have been retrieved by the frame extractor in a single representation. Such a representation would not immediately provide the granularity to state what the beliefs or opinions in the debates actually are, but rather, it might inspire a sense of what those opinions might be about, thus pointing towards potentially interesting phenomena that might warrant closer examination.
Figure FIGREF10, a high-level overview of the opinion landscape, reveals a number of areas to which opinions and beliefs might pertain. The top-left clusters in the diagram for instance reveal opinions about the role of people and countries, whereas on the right-hand side, we find a complementary cluster that might point to beliefs concerning the influence of high or increased CO2-emissions. In between, there is a cluster on power and energy sources, reflecting the energy debate's association to both issues of human responsibility and CO2 emissions. As such, the overview can already inspire, potentially at best, some very general hypotheses about the types of opinions that figure in the climate change debate.
<<</A macro-level overview: causes addressed in the climate change debate>>>
<<<Micro-level investigations: opinions on nuclear power and global warming>>>
Based on the range of topics on which beliefs are expressed, a micro-level analysis can be conducted to reveal what those beliefs are and, for instance, whether they align or contradict each other. This can be achieved by singling out a cause of interest, and mapping out its associated effects.
As revealed by the global overview of the climate change opinion landscape, a portion of the debate concerns power and energy sources. One topic with a particularly interesting role in this debate is nuclear power. Figure FIGREF12 illustrates how a more detailed representation of opinions on this matter can be created by spatially representing all of the effects associated with causes containing the expression `nuclear power'. Again, similar beliefs (in terms of words used in the effects) are positioned closer to each other, thus facilitating the detection of clusters. Commenters on The Guardian for instance express concerns about the deaths or extinction that might be caused by this energy resource. They also voice opinions on its cleanliness, whether or not it might decrease pollution or be its own source of pollution, and how it reduces CO2-emissions in different countries.
Whereas the detailed opinion landscape on `nuclear power' is relatively limited in terms of the number of mined opinions, other topics might reveal more elaborate belief systems. This is for instance the case for the phenomenon of `global warming'. As shown in Figure FIGREF13, opinions on global warming are clustered around the idea of `increases', notably in terms of evaporation, drought, heat waves, intensity of cyclones and storms, etc. An adjacent cluster is related to `extremes', such as extreme summers and weather events, but also extreme colds.
<<</Micro-level investigations: opinions on nuclear power and global warming>>>
<<</Spatial renditions of TheGuardian.com's opinion landscape>>>
<<</Analyses and applications>>>
<<<From opinion observation to debate facilitation>>>
The observatory introduced in the preceding paragraphs provides preliminary insights into the range and scope of the beliefs that figure in climate change debates on TheGuardian.com. The observatory as such takes a distinctly descriptive stance, and aims to satisfy, at least in part, the information needs of researchers, activists, journalists and other stakeholders whose main concern is to document, investigate and understand on-line opinion dynamics. However, in the current information sphere, which is marked by polarization, misinformation and a close entanglement with real-world conflicts, taking a mere descriptive or neutral stance might not serve every stakeholder's needs. Indeed, given the often skewed relations between power and information, questions arise as to how media observations might in turn be translated into (political, social or economic) action. Knowledge about opinion dynamics might for instance inform interventions that remedy polarization or disarm conflict. In other words, the construction of (social) media observatories unavoidably lifts questions about the possibilities, limitations and, especially, implications of the machine-guided and human-incentivized facilitation of on-line discussions and debates.
Addressing these questions, the present paragraph introduces and explores the concept of a debate facilitator, that is, a device that extends the capabilities of the previously discussed observatory to also promote more interesting and constructive discussions. Concretely, we will conceptualize a device that reveals how the personal opinion landscapes of commenters relate to each other (in terms of overlap or lack thereof), and we will discuss what steps might potentially be taken on the basis of such representation to balance the debate. Geared towards possible interventions in the debate, such a device may thus go well beyond the observatory's objectives of making opinion processes and conflicts more transparent, which concomitantly raises a number of serious concerns that need to be acknowledged.
On rather fundamental ground, tools that steer debates in one way or another may easily become manipulative and dangerous instruments in the hands of certain interest groups. Various aspects of our daily lives are for instance already implicitly guided by recommender systems, the purpose and impact of which can be rather opaque. For this reason, research efforts across disciplines are directed at scrutinizing and rendering such systems more transparent BIBREF28. Such scrutiny is particularly pressing in the context of interventions on on-line communication platforms, which have already been argued to enforce affective communication styles that feed rather than resolve conflict. The objectives behind any facilitation device should therefore be made maximally transparent and potential biases should be fully acknowledged at every level, from data ingest to the dissemination of results BIBREF29. More concretely, the endeavour of constructing opinion observatories and facilitators foregrounds matters of `openness' of data and tools, security, ensuring data quality and representative sampling, accounting for evolving data legislation and policy, building communities and trust, and envisioning beneficial implications. By documenting the development process for a potential facilitation device, the present paper aims to contribute to these on-going investigations and debates. Furthermore, every effort has been made to protect the identities of the commenters involved. In the words of media and technology visionary Jaron Lanier, developers and computational social scientists entering this space should remain fundamentally aware of the fact that `digital information is really just people in disguise' BIBREF30.
With these reservations in mind, the proposed approach can be situated among ongoing efforts that lead from debate observation to facilitation. One such pathway, for instance, involves the construction of filters to detect hate speech, misinformation and other forms of expression that might render debates toxic BIBREF31, BIBREF32. Combined with community outreach, language-based filtering and detection tools have proven to raise awareness among social media users about the nature and potential implications of their on-line contributions BIBREF33. Similarly, advances can be expected from approaches that aim to extend the scope of analysis beyond descriptions of a present debate situation in order to model how a debate might evolve over time and how intentions of the participants could be included in such an analysis.
Progress in any of these areas hinges on a further integration of real-world data in the modelling process, as well as a further socio-technical and media-theoretical investigation of how activity on social media platforms and technologies correlate to real-world conflicts. The remainder of this section therefore ventures to explore how conceptual argument communication models for polarization and alignment BIBREF34 might be reconciled with real-world data, and how such models might inform debate facilitation efforts.
<<<Debate facilitation through models of alignment and polarization>>>
As discussed in previous sections, news websites like TheGuardian.com establish a communicative setting in which agents (users, commenters) exchange arguments about different issues or topics. For those seeking to establish a healthy debate, it could thus be of interest to know how different users relate to each other in terms of their beliefs about a certain issue or topic (in this case climate change). Which beliefs are for instance shared by users and which ones are not? In other words, can we map patterns of alignment or polarization among users?
Figure FIGREF15 ventures to demonstrate how representations of opinion landscapes (generated using the methods outlined above) can be enriched with user information to answer such questions. Specifically, the graph represents the beliefs of two among the most active commenters in the corpus. The opinions of each user are marked using a colour coding scheme: red nodes represent the beliefs of the first user, blue nodes represent the beliefs of the second user. Nodes with a green colour represent beliefs that are shared by both users.
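The colour coding itself requires little more than a set comparison. The snippet below is a hypothetical illustration of that step (user_a_statements and user_b_statements are placeholder names for the two commenters' statement sets, not variables from the actual tool, and the graph is assumed to contain only these two commenters' statements).

def colour_nodes(graph, user_a_statements, user_b_statements):
    # Red: only commenter A, blue: only commenter B, green: beliefs shared by both.
    colours = {}
    for node in graph.nodes:
        in_a, in_b = node in user_a_statements, node in user_b_statements
        colours[node] = "green" if (in_a and in_b) else ("red" if in_a else "blue")
    return colours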
Taking into account again the factors of aggregation that were discussed in the previous section, Figure FIGREF15 supports some preliminary observations about the relationship between the two users in terms of their beliefs. Generally, given the fact that the graph concerns the two most active commenters on the website, it can be seen that the rendered opinion landscape is quite extensive. It is also clear that the belief systems of both users are not unrelated, as nodes of all colours can be found distributed throughout the graph. This is especially the case for the right-hand top cluster and right-hand bottom cluster of the graph, where green, red, and blue nodes are mixed. Since both users are discussing on articles on climate change, a degree of affinity between opinions or beliefs is to be expected.
Upon closer examination, a number of disparities between the belief systems of the two commenters can be detected. Considering the left-hand top cluster and center of the graph, it becomes clear that exclusively the red commenter is using a selection of terms related to the economical and socio-political realm (e.g. `people', `american', `nation', `government') and industry (e.g. `fuel', `industry', `car', etc.). The blue commenter, on the other hand, exclusively engages in using a range of terms that could be deemed more technical and scientific in nature (e.g. `feedback', `property', `output', `trend', `variability', etc.). From the graph, it also follows that the blue commenter does not enter into the red commenter's `social' segments of the graph as frequently as the red commenter enters the more scientifically-oriented clusters of the graph (although in the latter cases the red commenter does not use the specific technical terminology of the blue commenter). The cluster where both beliefs mingle the most (and where overlap can be observed), is the top right cluster. This overlap is constituted by very general terms (e.g. `climate', `change', and `science'). In sum, the graph reveals that the commenters' beliefs are positioned most closely to each other on the most general aspects of the debate, whereas there is less relatedness on the social and more technical aspects of the debate. In this regard, the depicted situation seemingly evokes currently on-going debates about the role or responsibilities of the people or individuals versus that of experts when it comes to climate change BIBREF35, BIBREF36, BIBREF37.
What forms of debate facilitation, then, could be based on these observations? And what kind of collective effects can be expected? As follows from the above, beliefs expressed by the two commenters shown here (which are selected based on their active participation rather than actual engagement or dialogue with one another) are to some extent complementary, as the blue commenter, who displays a scientifically-oriented system of beliefs, does not readily engage with the social topics discussed by the red commenter. As such, the overall opinion landscape of the climate change could potentially be enriched with novel perspectives if the blue commenter was invited to engage in a debate about such topics as industry and government. Similarly, one could explore the possibility of providing explanatory tools or additional references on occasions where the debate takes a more technical turn.
However, argument-based models of collective attitude formation BIBREF38, BIBREF34 also tell us to be cautious about such potential interventions. Following the theory underlying these models, different opinion groups prevailing during different periods of a debate will activate different argumentative associations. Facilitating exchange between users with complementary arguments supporting similar opinions may enforce biased argument pools BIBREF39 and lead to increasing polarization at the collective level. In the example considered here the two commenters agree on the general topic, but the analysis suggests that they might have different opinions about the adequate direction of specific climate change action. A more fine–grained automatic detection of cognitive and evaluative associations between arguments and opinions is needed for a reliable use of models to predict what would come out of facilitating exchange between two specific users. In this regard, computational approaches to the linguistic analysis of texts such as semantic frame extraction offer productive opportunities for empirically modelling opinion dynamics. Extraction of causation frames allows one to disentangle cause-effect relations between semantic units, which provides a productive step towards mapping and measuring structures of cognitive associations. These opportunities are to be explored by future work.
<<</Debate facilitation through models of alignment and polarization>>>
<<</From opinion observation to debate facilitation>>>
<<<Conclusion>>>
Ongoing transitions from a print-based media ecology to on-line news and discussion platforms have put traditional forms of news production and consumption at stake. Many challenges related to how information is currently produced and consumed come to a head in news website comment sections, which harbour the potential of providing new insights into how cultural conflicts emerge and evolve. On the basis of an observatory for analyzing climate change-related comments from TheGuardian.com, this article has critically examined possibilities and limitations of the machine-assisted exploration and possible facilitation of on-line opinion dynamics and debates.
Beyond technical and modelling pathways, this examination brings into view broader methodological and epistemological aspects of the use of digital methods to capture and study the flow of on-line information and opinions. Notably, the proposed approaches lift questions of computational analysis and interpretation that can be tied to an overarching tension between `distant' and `close reading' BIBREF40. In other words, monitoring on-line opinion dynamics means embracing the challenges and associated trade-offs that come with investigating large quantities of information through computational, text-analytical means, but doing this in such a way that nuance and meaning are not lost in the process.
Establishing productive cross-overs between the level of opinions mined at scale (for instance through the lens of causation frames) and the detailed, closer looks at specific conversations, interactions and contexts depends on a series of preliminaries. One of these is the continued availability of high-quality, accessible data. As the current on-line media ecology is recovering from recent privacy-related scandals (e.g. Cambridge Analytica), such data for obvious reasons is not always easy to come by. In the same legal and ethical vein, reproducibility and transparency of models is crucial to the further development of analytical tools and methods. As the experiments discussed in this paper have revealed, a key factor in this undertaking are human faculties of interpretation. Just like the encoding schemes introduced by Axelrod and others before the wide-spread use of computational methods, present-day pipelines and tools foreground the role of human agents as the primary source of meaning attribution.
<This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 732942 (Opinion Dynamics and Cultural Conflict in European Spaces – www.Odycceus.eu).>
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nObjective\nData: the communicative setting of TheGuardian.com\nMining opinions and beliefs from texts\nCausal mapping methods and the climate change debate\nAutomated causation tracking with the Penelope semantic frame extractor\nAnalyses and applications\nAggregation\nSpatial renditions of TheGuardian.com's opinion landscape\nA macro-level overview: causes addressed in the climate change debate\nMicro-level investigations: opinions on nuclear power and global warming\nFrom opinion observation to debate facilitation\nDebate facilitation through models of alignment and polarization\nConclusion"
],
"type": "outline"
}
|
1909.00578
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
SUM-QE: a BERT-based Summary Quality Estimation Model
<<<Abstract>>>
We propose SumQE, a novel Quality Estimation model for summarization based on BERT. The model addresses linguistic quality aspects that are only indirectly captured by content-based approaches to summary evaluation, without involving comparison with human references. SumQE achieves very high correlations with human ratings, outperforming simpler models addressing these linguistic aspects. Predictions of the SumQE model can be used for system development, and to inform users of the quality of automatically produced summaries and other types of generated text.
<<</Abstract>>>
<<<Introduction>>>
Quality Estimation (QE) is a term used in machine translation (MT) to refer to methods that measure the quality of automatically translated text without relying on human references BIBREF0, BIBREF1. In this study, we address QE for summarization. Our proposed model, Sum-QE, successfully predicts linguistic qualities of summaries that traditional evaluation metrics fail to capture BIBREF2, BIBREF3, BIBREF4, BIBREF5. Sum-QE predictions can be used for system development, to inform users of the quality of automatically produced summaries and other types of generated text, and to select the best among summaries output by multiple systems.
Sum-QE relies on the BERT language representation model BIBREF6. We use a pre-trained BERT model adding just a task-specific layer, and fine-tune the entire model on the task of predicting linguistic quality scores manually assigned to summaries. The five criteria addressed are given in Figure FIGREF2. We provide a thorough evaluation on three publicly available summarization datasets from NIST shared tasks, and compare the performance of our model to a wide variety of baseline methods capturing different aspects of linguistic quality. Sum-QE achieves very high correlations with human ratings, showing the ability of BERT to model linguistic qualities that relate to both text content and form.
<<</Introduction>>>
<<<Related Work>>>
Summarization evaluation metrics like Pyramid BIBREF5 and ROUGE BIBREF3, BIBREF2 are recall-oriented; they basically measure the content from a model (reference) summary that is preserved in peer (system generated) summaries. Pyramid requires substantial human effort, even in its more recent versions that involve the use of word embeddings BIBREF8 and a lightweight crowdsourcing scheme BIBREF9. ROUGE is the most commonly used evaluation metric BIBREF10, BIBREF11, BIBREF12. Inspired by BLEU BIBREF4, it relies on common $n$-grams or subsequences between peer and model summaries. Many ROUGE versions are available, but it remains hard to decide which one to use BIBREF13. Being recall-based, ROUGE correlates well with Pyramid but poorly with linguistic qualities of summaries. BIBREF14 proposed a regression model for measuring summary quality without references. The scores of their model correlate well with Pyramid and Responsiveness, but text quality is only addressed indirectly.
Quality Estimation is well established in MT BIBREF15, BIBREF0, BIBREF1, BIBREF16, BIBREF17. QE methods provide a quality indicator for translation output at run-time without relying on human references, typically needed by MT evaluation metrics BIBREF4, BIBREF18. QE models for MT make use of large post-edited datasets, and apply machine learning methods to predict post-editing effort scores and quality (good/bad) labels.
We apply QE to summarization, focusing on linguistic qualities that reflect the readability and fluency of the generated texts. Since no post-edited datasets – like the ones used in MT – are available for summarization, we use instead the ratings assigned by human annotators with respect to a set of linguistic quality criteria. Our proposed models achieve high correlation with human judgments, showing that it is possible to estimate summary quality without human references.
<<</Related Work>>>
<<<Datasets>>>
We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems).
The submitted summaries were manually evaluated in terms of content preservation using the Pyramid score, and according to five linguistic quality criteria ($\mathcal {Q}1, \dots , \mathcal {Q}5$), described in Figure FIGREF2, that do not involve comparison with a model summary. Annotators assigned scores on a five-point scale, with 1 and 5 indicating that the summary is bad or good with respect to a specific $\mathcal {Q}$. The overall score for a contestant with respect to a specific $\mathcal {Q}$ is the average of the manual scores assigned to the summaries generated by the contestant. Note that the DUC-04 shared task involved seven $\mathcal {Q}$s, but some of them were found to be highly overlapping and were grouped into five in subsequent years BIBREF20. We address these five criteria and use DUC data from 2005 onwards in our experiments.
<<</Datasets>>>
<<<Methods>>>
<<<The Sum-QE Model>>>
In Sum-QE, each peer summary is converted into a sequence of token embeddings, consumed by an encoder $\mathcal {E}$ to produce a (dense vector) summary representation $h$. Then, a regressor $\mathcal {R}$ predicts a quality score $S_{\mathcal {Q}}$ as an affine transformation of $h$:
Non-linear regression could also be used, but a linear (affine) $\mathcal {R}$ already performs well. We use BERT as our main encoder and fine-tune it in three ways, which leads to three versions of Sum-QE.
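The display equation referred to here is not reproduced in this extract. A plausible form of the affine regressor, inferred from the surrounding description (and thus an assumption rather than a quotation of the original formula), is:

$S_{\mathcal{Q}} = \mathcal{R}(h) = \mathbf{w}^{\top} h + b, \quad \text{where } h = \mathcal{E}(\text{peer summary})$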
<<<Single-task (BERT-FT-S-1):>>>
The first version of Sum-QE uses five separate estimators, one per quality score, each having its own encoder $\mathcal {E}_i$ (a separate BERT instance generating $h_i$) and regressor $\mathcal {R}_i$ (a separate linear regression layer on top of the corresponding BERT instance):
<<</Single-task (BERT-FT-S-1):>>>
<<<Multi-task with one regressor (BERT-FT-M-1):>>>
The second version of Sum-QE uses one estimator to predict all five quality scores at once, from a single encoding $h$ of the summary, produced by a single BERT instance. The intuition is that $\mathcal {E}$ will learn to create richer representations so that $\mathcal {R}$ (an affine transformation of $h$ with 5 outputs) will be able to predict all quality scores:
where $\mathcal {R}(h)[i]$ is the $i$-th element of the vector returned by $\mathcal {R}$.
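As the corresponding display equation is also missing from this extract, a reconstruction consistent with the explanation above (again an assumption about the notation, not the original equation) would be:

$S_{\mathcal{Q}_i} = \mathcal{R}(h)[i], \quad \mathcal{R}(h) = W h + \mathbf{b}, \quad W \in \mathbb{R}^{5 \times d}, \; i = 1, \dots, 5,$

where $d$ is the dimensionality of the summary encoding $h$ produced by the single BERT instance.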
<<</Multi-task with one regressor (BERT-FT-M-1):>>>
<<<Multi-task with 5 regressors (BERT-FT-M-5):>>>
The third version of Sum-QE is similar to BERT-FT-M-1, but we now use five different linear (affine) regressors, one per quality score:
Although BERT-FT-M-5 is mathematically equivalent to BERT-FT-M-1, in practice these two versions of Sum-QE produce different results because of implementation details related to how the losses of the regressors (five or one) are combined.
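The practical difference between the two multi-task variants is easiest to see in code. The following sketch is an illustrative reconstruction built on the Hugging Face transformers library, not the authors' implementation; in particular, the use of the pooled [CLS] representation and the chosen dimensions are assumptions made for the example.

import torch
import torch.nn as nn
from transformers import BertModel

class SumQEMultiTask(nn.Module):
    def __init__(self, five_regressors=False, hidden=768, n_scores=5):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        if five_regressors:   # BERT-FT-M-5: five separate affine heads, one per quality
            self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_scores)])
        else:                 # BERT-FT-M-1: a single affine head with five outputs
            self.heads = nn.Linear(hidden, n_scores)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask).pooler_output
        if isinstance(self.heads, nn.ModuleList):
            return torch.cat([head(h) for head in self.heads], dim=-1)  # (batch, 5)
        return self.heads(h)                                            # (batch, 5)

Both branches compute mathematically equivalent affine maps, but their training losses can be combined differently (for example, one joint regression loss versus a sum of five separate losses), which is the kind of implementation detail alluded to above.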
<<</Multi-task with 5 regressors (BERT-FT-M-5):>>>
<<</The Sum-QE Model>>>
<<<Baselines>>>
<<<BiGRU s with attention:>>>
This is very similar to Sum-QE but now $\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).
<<</BiGRU s with attention:>>>
<<<ROUGE:>>>
This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.
<<</ROUGE:>>>
<<<Language model (LM):>>>
For a peer summary, a reasonable estimate of $\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.
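A rough sketch of this baseline, written against the Hugging Face transformers interface to GPT-2, could look as follows; the exact scoring details of the original baseline (such as how $k$ is tuned) are not reproduced here, so this should be read as an illustrative approximation only.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def k_worst_perplexity(summary, k=10):
    ids = tokenizer(summary, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                                          # (1, T, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = log_probs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)   # (1, T-1)
    worst = token_lp.topk(min(k, token_lp.size(-1)), largest=False).values  # k lowest log-probs
    return torch.exp(-worst.mean()).item()                                  # perplexity over the k worst tokens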
<<</Language model (LM):>>>
<<<Next sentence prediction:>>>
BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\mathcal {Q}3$ (Referential Clarity), $\mathcal {Q}4$ (Focus) and $\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:
where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\left< s_{i-1}, s \right>$, and $n$ is the number of sentences in the peer summary.
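The display formula itself is absent from this extract; one plausible reconstruction that matches the accompanying explanation (an assumption about, rather than a quotation of, the original definition) is:

$\text{PPL}_{\text{NS}} = \exp \Big( -\frac{1}{n} \sum_{i=2}^{n} \log p(s_i \mid s_{i-1}) \Big)$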
<<</Next sentence prediction:>>>
<<</Baselines>>>
<<</Methods>>>
<<<Experiments>>>
To evaluate our methods for a particular $\mathcal {Q}$, we calculate the average of the predicted scores for the summaries of each particular contestant, and the average of the corresponding manual scores assigned to the contestant's summaries. We measure the correlation between the two (predicted vs. manual) across all contestants using Spearman's $\rho $, Kendall's $\tau $ and Pearson's $r$.
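For readers who wish to reproduce this evaluation protocol, the correlation step is straightforward with SciPy; the sketch below assumes that the per-contestant averages have already been computed and stored in two equally ordered lists, which is a simplification made for illustration.

from scipy.stats import spearmanr, kendalltau, pearsonr

# predicted_means[i] and manual_means[i] hold the average predicted and manual
# scores of contestant i for a given quality Q.
rho, _ = spearmanr(predicted_means, manual_means)
tau, _ = kendalltau(predicted_means, manual_means)
r, _ = pearsonr(predicted_means, manual_means)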
We train and test the Sum-QE and BiGRU-ATT versions using a 3-fold procedure. In each fold, we train on two datasets (e.g., DUC-05, DUC-06) and test on the third (e.g., DUC-07). We follow the same procedure with the three BiGRU-based models. Hyper-parameters are tuned on a held-out subset from the training set of each fold.
<<</Experiments>>>
<<<Results>>>
Table TABREF23 shows Spearman's $\rho $, Kendall's $\tau $ and Pearson's $r$ for all datasets and models. The three fine-tuned BERT versions clearly outperform all other methods. Multi-task versions seem to perform better than single-task ones in most cases. Especially for $\mathcal {Q}4$ and $\mathcal {Q}5$, which are highly correlated, the multi-task BERT versions achieve the best overall results. BiGRU-ATT also benefits from multi-task learning.
The correlation of Sum-QE with human judgments is high or very high BIBREF23 for all $\mathcal {Q}$s in all datasets, apart from $\mathcal {Q}2$ in DUC-05 where it is only moderate. Manual scores for $\mathcal {Q}2$ in DUC-05 are the highest among all $\mathcal {Q}$s and years (between 4 and 5) and with the smallest standard deviation, as shown in Table TABREF24. Differences among systems are thus small in this respect, and although Sum-QE predicts scores in this range, it struggles to put them in the correct order, as illustrated in Figure FIGREF26.
BEST-ROUGE has a negative correlation with the ground-truth scores for $\mathcal {Q}$2 since it does not account for repetitions. The BiGRU-based models also reach their lowest performance on $\mathcal {Q}$2 in DUC-05. A possible reason for the higher relative performance of the BERT-based models, which achieve a moderate positive correlation, is that BiGRU captures long-distance relations less effectively than BERT, which utilizes Transformers BIBREF24 and has a larger receptive field. A possible improvement would be a stacked BiGRU, since the states of higher stack layers have a larger receptive field as well.
The BERT multi-task versions perform better with highly correlated qualities like $\mathcal {Q}4$ and $\mathcal {Q}5$ (as illustrated in Figures 2 to 4 in the supplementary material). However, there is not a clear winner among them. Mathematical equivalence does not lead to deterministic results, especially when random initialization and stochastic learning algorithms are involved. An in-depth exploration of this point would involve further investigation, which will be part of future work.
<<</Results>>>
<<<Conclusion and Future Work>>>
We propose a novel Quality Estimation model for summarization which does not require human references to estimate the quality of automatically produced summaries. Sum-QE successfully predicts qualitative aspects of summaries that recall-oriented evaluation metrics fail to approximate. Leveraging powerful BERT representations, it achieves high correlations with human scores for most linguistic qualities rated, on three different datasets. Future work involves extending the Sum-QE model to capture content-related aspects, either in combination with existing evaluation metrics (like Pyramid and ROUGE) or, preferably, by identifying important information in the original text and modelling its preservation in the proposed summaries. This would preserve Sum-QE's independence from human references, a property of central importance in real-life usage scenarios and system development settings.
The datasets used in our experiments come from the NIST DUC shared tasks which comprise newswire articles. We believe that Sum-QE could be easily applied to other domains. A small amount of annotated data would be needed for fine-tuning – especially in domains with specialized vocabulary (e.g., biomedical) – but the model could also be used out of the box. A concrete estimation of performance in this setting will be part of future work. Also, the model could serve to estimate linguistic qualities other than the ones in the DUC dataset with mininum effort.
Finally, Sum-QE could serve to assess the quality of other types of texts, not only summaries. It could thus be applied to other text generation tasks, such as natural language generation and sentence compression.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nDatasets\nMethods\nThe Sum-QE Model\nSingle-task (BERT-FT-S-1):\nMulti-task with one regressor (BERT-FT-M-1):\nMulti-task with 5 regressors (BERT-FT-M-5):\nBaselines\nBiGRU s with attention:\nROUGE:\nLanguage model (LM):\nNext sentence prediction:\nExperiments\nResults\nConclusion and Future Work"
],
"type": "outline"
}
|
1910.11471
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Machine Translation from Natural Language to Code using Long-Short Term Memory
<<<Abstract>>>
Making computer programming language more understandable and easy for humans is a longstanding problem. From assembly language to present day’s object-oriented programming, concepts came to make programming easier so that a programmer can focus on the logic and the architecture rather than the code and language itself. To go a step further in this journey of removing the human-computer language barrier, this paper proposes a machine learning approach using Recurrent Neural Network (RNN) and Long-Short Term Memory (LSTM) to convert human language into programming language code. The programmer will write expressions for codes in layman’s language, and the machine learning model will translate it to the targeted programming language. The proposed approach yields results with 74.40% accuracy. This can be further improved by incorporating additional techniques, which are also discussed in this paper.
<<</Abstract>>>
<<<Introduction>>>
Removing the computer-human language barrier is an inevitable advancement that researchers have been striving to achieve for decades. One of the stages of this advancement will be coding through natural human language instead of a traditional programming language. On the naturalness of computer programming, D. Knuth said, “Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.” BIBREF0. Unfortunately, learning a programming language is still necessary to instruct it. Researchers and developers are working to overcome this human-machine language barrier. Multiple branches exist to solve this challenge (i.e. inter-conversion of different programming languages to have universally connected programming languages). Automatic code generation through natural language is not a new concept in computer science studies. However, it is difficult to create such a tool due to the following three reasons–
Programming languages are diverse
An individual person expresses logical statements differently than other
Natural Language Processing (NLP) of programming statements is challenging since both human and programming language evolve over time
In this paper, a neural approach to translate pseudo-code or algorithm like human language expression into programming language code is proposed.
<<</Introduction>>>
<<<Problem Description>>>
Code repositories (i.e. Git, SVN) flourished in the last decade producing big data of code allowing data scientists to perform machine learning on these data. In 2017, Allamanis M et al. published a survey in which they presented the state-of-the-art of the research areas where machine learning is changing the way programmers code during software engineering and development process BIBREF1. This paper discusses what are the restricting factors of developing such text-to-code conversion method and what problems need to be solved–
<<<Programming Language Diversity>>>
According to the sources, there are more than a thousand actively maintained programming languages, which signifies the diversity of these languages. These languages were created to achieve different purposes and use different syntaxes. Low-level languages such as assembly languages are easier to express in human language because of their low level of abstraction (or no abstraction at all), whereas high-level, or Object-Oriented Programming (OOP), languages are more diversified in syntax and expression, which is challenging to bring into a unified human language structure. Nonetheless, portability and transparency between different programming languages also remain a challenge and an open research area. George D. et al. tried to overcome this problem through XML mapping BIBREF2. They tried to convert codes from C++ to Java using XML mapping as an intermediate language. However, the authors encountered challenges in supporting different features of both languages.
<<</Programming Language Diversity>>>
<<<Human Language Factor>>>
One of the motivations behind this paper is that, as long as it is about programming, there is a finite and small set of expressions used in the human vocabulary. For instance, programmers express a for-loop in only a few specific ways BIBREF3. Variable declaration and value assignment expressions are also limited in nature. Although all code is executable, its human representation through text may not be, due to the semantic brittleness of code. Since high-level languages have a wide range of syntax, programmers use different linguistic expressions to explain those. For instance, small changes like swapping function arguments can significantly change the meaning of the code. Hence the challenge remains in processing human language to understand it properly, which brings us to the next problem-
<<</Human Language Factor>>>
<<<NLP of statements>>>
Although there is a finite set of expressions for each programming statement, it is a challenge to extract information from the statements of the code accurately. Semantic analysis of linguistic expressions plays an important role in this information extraction. For instance, in the case of a loop, what is the initial value? What is the step value? When will the loop terminate?
Mihalcea R. et al. have achieved a variable success rate of 70-80% in producing code just from the problem statement expressed in human natural language BIBREF3. They focused solely on the detection of steps and loops in their research. Another research group from MIT, Lei et al., use a semantic learning model for text to detect the inputs. The model produces a parser in C++ which can successfully parse more than 70% of the textual descriptions of input BIBREF4. The test dataset and model were initially tested and targeted against ACM-ICPC participants' inputs, which contain diverse and sometimes complex input instructions.
A recent survey from Allamanis M. et al. presented the state-of-the-art in the area of naturalness of programming BIBREF1. A number of research works have been conducted in the text-to-code or code-to-text area in recent years. In 2015, Oda et al. proposed a way to translate each line of Python code into natural language pseudocode, for which a Statistical Machine Translation (SMT) framework BIBREF5 was used. This translation framework successfully translates code to natural language pseudocode in both English and Japanese. In the same year, Chris Q. et al. mapped natural language with simple if-this-then-that logical rules BIBREF6. Tihomir G. and Viktor K. developed an Integrated Development Environment (IDE) integrated code assistant tool, anyCode, for Java which can search, import and call functions just by typing the desired functionality through text BIBREF7. They used a model and mapping framework between function signatures and utilized resources like WordNet, Java Corpus and relational mapping to process text online and offline.
Recently in 2017, P. Yin and G. Neubig proposed a semantic parser which generates code through its neural model BIBREF8. They formulated a grammatical model which works as a skeleton for neural network training. The grammatical rules are defined based on the various generalized structure of the statements in the programming language.
<<</NLP of statements>>>
<<</Problem Description>>>
<<<Proposed Methodology>>>
The use of machine learning techniques such as SMT proved to be at most 75% successful in converting human text to executable code. BIBREF9. A programming language is just like a language with less vocabulary compared to a typical human language. For instance, the code vocabulary of the training dataset was 8814 (including variable, function, class names), whereas the English vocabulary to express the same code was 13659 in total. Here, programming language is considered just like another human language and widely used SMT techniques have been applied.
<<<Statistical Machine Translation>>>
SMT techniques are widely used in Natural Language Processing (NLP). SMT plays a significant role in translation from one language to another, especially in lexical and grammatical rule extraction. In SMT, bilingual grammatical structures are automatically formed by statistical approaches instead of explicitly providing a grammatical model. This reduces months and years of work which would require significant collaboration between bilingual linguists. Here, a neural network based machine translation model is used to translate regular text into programming code.
<<<Data Preparation>>>
SMT techniques require a parallel corpus in the source and the target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus contains 18805 aligned pairs. In the source data, the expression of each line of code is written in the English language. In the target data, the code is written in the Python programming language.
<<</Data Preparation>>>
<<<Vocabulary Generation>>>
To train the neural model, the texts should be converted to a computational entity. To do that, two separate vocabulary files are created - one for the source texts and another for the code. Vocabulary generation is done by tokenization of words. Afterwards, the words are put into their contextual vector space using the popular word2vec BIBREF10 method to make the words computational.
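As an illustration of this step, the widely used gensim implementation of word2vec produces such vectors in a few lines; the parameter values below are illustrative defaults rather than the settings used in the paper (and gensim 3.x names the dimensionality parameter size instead of vector_size).

from gensim.models import Word2Vec

# tokenized_lines: a list of token lists, one per line of the source (text) or target (code) corpus
model = Word2Vec(sentences=tokenized_lines, vector_size=100, window=5, min_count=1, workers=4)
vector = model.wv["loop"]  # vector-space representation of a token, usable by the neural model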
<<</Vocabulary Generation>>>
<<<Neural Model Training>>>
In order to train the text-to-code translation model, an open source Neural Machine Translation (NMT) implementation - OpenNMT - is utilized BIBREF11. PyTorch is used as the neural network coding framework. For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form an LSTM model. LSTM is typically used in seq2seq translation.
In Fig. FIGREF13, the neural model architecture is demonstrated. The diagram shows how it takes the source and target text as input and uses it for training. Vector representations of the tokenized source and target text are fed into the model. Each token of the source text is passed into an encoder cell. Target text tokens are passed into a decoder cell. Encoder cells are part of the encoder RNN layer and decoder cells are part of the decoder RNN layer. The end of the input sequence is marked by a $<$eos$>$ token. Upon getting the $<$eos$>$ token, the final cell state of the encoder layer initiates the output layer sequence. At each target cell state, attention is applied with the encoder RNN state and combined with the current hidden state to produce the prediction of the next target token. These predictions are then fed back to the target RNN. The attention mechanism helps us to overcome the fixed-length restriction of the encoder-decoder sequence and allows us to process variable lengths between input and output sequences. Attention uses the encoder state and passes it to the decoder cell to give particular attention to the start of an output layer sequence. The encoder uses an initial state to tell the decoder what it is supposed to generate. Effectively, the decoder learns to generate target tokens, conditioned on the input sequence. Sigmoidal optimization is used to optimize the prediction.
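The paper trains this architecture through OpenNMT, but the encoder-attention-decoder interplay described above can also be summarized in a compact PyTorch sketch. The code below is an illustrative reconstruction rather than the authors' configuration: the layer sizes, the single-layer LSTMs and the dot-product (Luong-style) attention are simplifying assumptions.

import torch
import torch.nn as nn

class Seq2SeqWithAttention(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(2 * hid_dim, tgt_vocab)  # [decoder state; context] -> code-token logits

    def forward(self, src_ids, tgt_ids):
        enc_states, final_state = self.encoder(self.src_emb(src_ids))
        # The final encoder state initialises the decoder, as described in the text.
        dec_states, _ = self.decoder(self.tgt_emb(tgt_ids), final_state)
        # Attention: score every encoder state against each decoder state.
        scores = torch.bmm(dec_states, enc_states.transpose(1, 2))  # (B, T_tgt, T_src)
        weights = torch.softmax(scores, dim=-1)
        context = torch.bmm(weights, enc_states)                    # (B, T_tgt, hid_dim)
        return self.out(torch.cat([dec_states, context], dim=-1))   # next-token predictions

Training would then minimize a token-level loss (such as cross-entropy) between these predictions and the shifted target code tokens, in line with standard seq2seq practice.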
<<</Neural Model Training>>>
<<</Statistical Machine Translation>>>
<<</Proposed Methodology>>>
<<<Result Analysis>>>
The training parallel corpus had 18805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation examples are used to generate the recurrent neural model, which is 3% of the training data. We run the training for 10 epochs with a batch size of 64. After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40% (Fig. FIGREF17).
Although the generated code is incoherent and the model often predicts wrong code tokens, this is expected because of the limited amount of training data. LSTM generally requires a more extensive set of data (100k+ in such a scenario) to build a more accurate model. The incoherence can be resolved by incorporating a coding syntax tree model in the future. For instance–
"define the method tzname with 2 arguments: self and dt."
is translated into–
def __init__ ( self , regex ) :.
The translator successfully generates the whole code line automatically but misses the noun parts (parameter and function names) of the syntax.
<<</Result Analysis>>>
<<<Conclusion & Future Works>>>
The main advantage of translating to a programming language is that it has a concrete and strict lexical and grammatical structure which human languages lack. The aim of this paper was to make the text-to-code framework work for a general-purpose programming language, primarily Python. In a later phase, phrase-based word embeddings can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, an Abstract Syntax Tree (AST) can be beneficial.
The contribution of this research is a machine learning model which can turn human expressions into coding expressions. This paper also discusses available methods which convert natural language to programming language successfully in a fixed or tightly bounded linguistic paradigm. Approaching this problem using machine learning will give us the opportunity to explore the possibility of a unified programming interface in the future as well.
<<</Conclusion & Future Works>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nProblem Description\nProgramming Language Diversity\nHuman Language Factor\nNLP of statements\nProposed Methodology\nStatistical Machine Translation\nData Preparation\nVocabulary Generation\nNeural Model Training\nResult Analysis\nConclusion & Future Works"
],
"type": "outline"
}
|
1910.09399
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis
<<<Abstract>>>
Text-to-image synthesis refers to computational methods which translate human written textual descriptions, in the form of keywords or sentences, into images with similar semantic meaning to the text. In earlier research, image synthesis relied mainly on word to image correlation analysis combined with supervised methods to find best alignment of the visual content matching to the text. Recent progress in deep learning (DL) has brought a new set of unsupervised deep learning methods, particularly deep generative models which are able to generate realistic visual images using suitably trained neural network models. In this paper, we review the most recent development in the text-to-image synthesis research domain. Our survey first introduces image synthesis and its challenges, and then reviews key concepts such as generative adversarial networks (GANs) and deep convolutional encoder-decoder neural networks (DCNN). After that, we propose a taxonomy to summarize GAN based text-to-image synthesis into four major categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANS, and Motion Enhancement GANs. We elaborate the main objective of each group, and further review typical GAN architectures in each group. The taxonomy and the review outline the techniques and the evolution of different approaches, and eventually provide a clear roadmap to summarize the list of contemporaneous solutions that utilize GANs and DCNNs to generate enthralling results in categories such as human faces, birds, flowers, room interiors, object reconstruction from edge maps (games) etc. The survey will conclude with a comparison of the proposed solutions, challenges that remain unresolved, and future developments in the text-to-image synthesis domain.
<<</Abstract>>>
<<<Introduction>>>
“Generative adversarial networks (GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” (2016)
– Yann LeCun
A picture is worth a thousand words! While written text provides efficient, effective, and concise ways for communication, visual content, such as images, is a more comprehensive, accurate, and intelligible method of information sharing and understanding. Generation of images from text descriptions, i.e. text-to-image synthesis, is a complex computer vision and machine learning problem that has seen great progress over recent years. Automatic image generation from natural language may allow users to describe visual elements through visually-rich text descriptions. The ability to do so effectively is highly desirable as it could be used in artificial intelligence applications such as computer-aided design, image editing BIBREF0, BIBREF1, game engines for the development of the next generation of video games BIBREF2, and pictorial art generation BIBREF3.
<<<blackTraditional Learning Based Text-to-image Synthesis>>>
In the early stages of research, text-to-image synthesis was mainly carried out through a combined search and supervised learning process BIBREF4, as shown in Figure FIGREF4. In order to connect text descriptions to images, one could use correlation between keywords (or keyphrases) and images that identifies informative and “picturable” text units; then, these units would search for the most likely image parts conditioned on the text, eventually optimizing the picture layout conditioned on both the text and the image parts. Such methods often integrated multiple key artificial intelligence components, including natural language processing, computer vision, computer graphics, and machine learning.
The major limitation of the traditional learning based text-to-image synthesis approaches is that they lack the ability to generate new image content; they can only change the characteristics of the given/training images. Alternatively, research in generative models has advanced significantly and delivers solutions to learn from training images and produce new visual content. For example, Attribute2Image BIBREF5 models each image as a composite of foreground and background. In addition, a layered generative model with disentangled latent variables is learned, using a variational auto-encoder, to generate visual content. Because the learning is customized/conditioned by given attributes, the generative models of Attribute2Image can generate images with respect to different attributes, such as gender, hair color, age, etc., as shown in Figure FIGREF5.
<<</blackTraditional Learning Based Text-to-image Synthesis>>>
<<<GAN Based Text-to-image Synthesis>>>
Although generative model based text-to-image synthesis provides much more realistic image synthesis results, the image generation is still conditioned by the limited attributes. In recent years, several papers have been published on the subject of text-to-image synthesis. Most of the contributions from these papers rely on multimodal learning approaches that include generative adversarial networks and deep convolutional decoder networks as their main drivers to generate entrancing images from text BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11.
First introduced by Ian Goodfellow et al. BIBREF9, generative adversarial networks (GANs) consist of two neural networks paired with a discriminator and a generator. These two models compete with one another, with the generator attempting to produce synthetic/fake samples that will fool the discriminator and the discriminator attempting to differentiate between real (genuine) and synthetic samples. Because GANs' adversarial training aims to cause generators to produce images similar to the real (training) images, GANs can naturally be used to generate synthetic images (image synthesis), and this process can even be customized further by using text descriptions to specify the types of images to generate, as shown in Figure FIGREF6.
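For readers unfamiliar with the formal objective behind this adversarial game, the standard (unconditional) GAN value function introduced by Goodfellow et al. BIBREF9 can be written as below; extending it to text-to-image synthesis typically amounts to conditioning both networks on a text embedding $t$, i.e. using $G(z, t)$ and $D(x, t)$, with the exact conditioning scheme depending on the specific architecture.

$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log (1 - D(G(z)))]$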
Much like text-to-speech and speech-to-text conversion, there exists a wide variety of problems that text-to-image synthesis could solve in the computer vision field specifically BIBREF8, BIBREF12. Nowadays, researchers are attempting to solve a plethora of computer vision problems with the aid of deep convolutional networks, generative adversarial networks, and a combination of multiple methods, often called multimodal learning methods BIBREF8. For simplicity, multiple learning methods will be referred to as multimodal learning hereafter BIBREF13. Researchers often describe multimodal learning as a method that incorporates characteristics from several methods, algorithms, and ideas. This can include ideas from two or more learning approaches in order to create a robust implementation to solve an uncommon problem or improve a solution BIBREF8, BIBREF14, BIBREF15, BIBREF16, BIBREF17.
black In this survey, we focus primarily on reviewing recent works that aim to solve the challenge of text-to-image synthesis using generative adversarial networks (GANs). In order to provide a clear roadmap, we propose a taxonomy to summarize reviewed GANs into four major categories. Our review will elaborate the motivations of methods in each category, analyze typical models, their network architectures, and possible drawbacks for further improvement. The visual abstract of the survey and the list of reviewed GAN frameworks is shown in Figure FIGREF8.
black The remainder of the survey is organized as follows. Section 2 presents a brief summary of existing works on subjects similar to that of this paper and highlights the key distinctions making ours unique. Section 3 gives a short introduction to GANs and some preliminary concepts related to image generation, as they are the engines that make text-to-image synthesis possible and are essential building blocks to achieve photo-realistic images from text descriptions. Section 4 proposes a taxonomy to summarize GAN based text-to-image synthesis, discusses models and architectures of novel works focused solely on text-to-image synthesis. This section will also draw key contributions from these works in relation to their applications. Section 5 reviews GAN based text-to-image synthesis benchmarks, performance metrics, and comparisons, including a simple review of GANs for other applications. In section 6, we conclude with a brief summary and outline ideas for future interesting developments in the field of text-to-image synthesis.
<<</GAN Based Text-to-image Synthesis>>>
<<</Introduction>>>
<<<Related Work>>>
With the growth and success of GANs, deep convolutional decoder networks, and multimodal learning methods, these techniques were some of the first procedures which aimed to solve the challenge of image synthesis. Many engineers and scientists in computer vision and AI have contributed through extensive studies and experiments, with numerous proposals and publications detailing their contributions. Because GANs, introduced by BIBREF9, are emerging research topics, their practical applications to image synthesis are still in their infancy. Recently, many new GAN architectures and designs have been proposed to use GANs for different applications, e.g. using GANs to generate sentimental texts BIBREF18, or using GANs to transform natural images into cartoons BIBREF19.
Although GANs are becoming increasingly popular, very few survey papers currently exist to summarize and outline contemporaneous technical innovations and contributions of different GAN architectures BIBREF20, BIBREF21. Survey papers specifically attuned to analyzing different contributions to text-to-image synthesis using GANs are even more scarce. We have thus found two surveys BIBREF6, BIBREF7 on image synthesis using GANs, which are the two most closely related publications to our survey objective. In the following paragraphs, we briefly summarize each of these surveys and point out how our objectives differ from theirs.
In BIBREF6, the authors provide an overview of image synthesis using GANs. In this survey, the authors discuss the motivations for research on image synthesis and introduce some background information on the history of GANs, including a section dedicated to core concepts of GANs, namely generators, discriminators, and the min-max game analogy, and some enhancements to the original GAN model, such as conditional GANs, addition of variational auto-encoders, etc.. In this survey, we will carry out a similar review of the background knowledge because the understanding of these preliminary concepts is paramount for the rest of the paper. Three types of approaches for image generation are reviewed, including direct methods (single generator and discriminator), hierarchical methods (two or more generator-discriminator pairs, each with a different goal), and iterative methods (each generator-discriminator pair generates a gradually higher-resolution image). Following the introduction, BIBREF6 discusses methods for text-to-image and image-to-image synthesis, respectively, and also describes several evaluation metrics for synthetic images, including inception scores and Frechet Inception Distance (FID), and explains the significance of the discriminators acting as learned loss functions as opposed to fixed loss functions.
Different from the above survey, which has a relatively broad scope in GANs, our objective is heavily focused on text-to-image synthesis. Although this topic, text-to-image synthesis, has indeed been covered in BIBREF6, they did so in a much less detailed fashion, mostly listing the many different works in a time-sequential order. In comparison, we will review several representative methods in the field and outline their models and contributions in detail.
Similarly to BIBREF6, the second survey paper BIBREF7 begins with a standard introduction addressing the motivation of image synthesis and the challenges it presents followed by a section dedicated to core concepts of GANs and enhancements to the original GAN model. In addition, the paper covers the review of two types of applications: (1) unconstrained applications of image synthesis such as super-resolution, image inpainting, etc., and (2) constrained image synthesis applications, namely image-to-image, text-to-image, and sketch-to image, and also discusses image and video editing using GANs. Again, the scope of this paper is intrinsically comprehensive, while we focus specifically on text-to-image and go into more detail regarding the contributions of novel state-of-the-art models.
Other surveys have been published on related matters, mainly related to the advancements and applications of GANs BIBREF22, BIBREF23, but we have not found any prior works which focus specifically on text-to-image synthesis using GANs. To our knowledge, this is the first paper to do so.
black
<<</Related Work>>>
<<<Preliminaries and Frameworks>>>
In this section, we first introduce preliminary knowledge of GANs and one of its commonly used variants, conditional GAN (i.e. cGAN), which is the building block for many GAN based text-to-image synthesis models. After that, we briefly separate GAN based text-to-image synthesis into two types, Simple GAN frameworks vs. Advanced GAN frameworks, and discuss why advanced GAN architecture for image synthesis.
black Notice that the simple vs. advanced GAN framework separation is rather too brief, our taxonomy in the next section will propose a taxonomy to summarize advanced GAN frameworks into four categories, based on their objective and designs.
<<<Generative Adversarial Neural Network>>>
Before moving on to a discussion and analysis of works applying GANs for text-to-image synthesis, there are some preliminary concepts, enhancements of GANs, datasets, and evaluation metrics that are present in some of the works described in the next section and are thus worth introducing.
As stated previously, GANs were introduced by Ian Goodfellow et al. BIBREF9 in 2014, and consist of two deep neural networks, a generator and a discriminator, which are trained independently with conflicting goals: The generator aims to generate samples closely related to the original data distribution and fool the discriminator, while the discriminator aims to distinguish between samples from the generator model and samples from the true data distribution by calculating the probability of the sample coming from either source. A conceptual view of the generative adversarial network (GAN) architecture is shown in Figure FIGREF11.
The training of GANs is an iterative process that, with each iteration, updates the generator and the discriminator with the goal of each defeating the other. leading each model to become increasingly adept at its specific task until a threshold is reached. This is analogous to a min-max game between the two models, according to the following equation:
In Eq. (DISPLAY_FORM10), $x$ denotes a multi-dimensional sample, e.g., an image, and $z$ denotes a multi-dimensional latent space vector, e.g., a multidimensional data point following a predefined distribution function such as that of normal distributions. $D_{\theta _d}()$ denotes a discriminator function, controlled by parameters $\theta _d$, which aims to classify a sample into a binary space. $G_{\theta _g}()$ denotes a generator function, controlled by parameters $\theta _g$, which aims to generate a sample from some latent space vector. For example, $G_{\theta _g}(z)$ means using a latent vector $z$ to generate a synthetic/fake image, and $D_{\theta _d}(x)$ means to classify an image $x$ as binary output (i.e. true/false or 1/0). In the GAN setting, the discriminator $D_{\theta _d}()$ is learned to distinguish a genuine/true image (labeled as 1) from fake images (labeled as 0). Therefore, given a true image $x$, the ideal output from the discriminator $D_{\theta _d}(x)$ would be 1. Given a fake image generated from the generator $G_{\theta _g}(z)$, the ideal prediction from the discriminator $D_{\theta _d}(G_{\theta _g}(z))$ would be 0, indicating the sample is a fake image.
Following the above definition, the $\min \max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\theta _d$) and generator ($\theta _g$) to reach an optimization goal: The discriminator intends to differentiate true vs. fake images with maximum capability $\max _{\theta _d}$ whereas the generator intends to minimize the difference between a fake image vs. a true image $\min _{\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs.
Generator - In image synthesis, the generator network can be thought of as a mapping from one representation space (latent space) to another (actual data) BIBREF21. When it comes to image synthesis, all of the images in the data space fall into some distribution in a very complex and high-dimensional feature space. Sampling from such a complex space is very difficult, so GANs instead train a generator to create synthetic images from a much more simple feature space (usually random noise) called the latent space. The generator network performs up-sampling of the latent space and is usually a deep neural network consisting of several convolutional and/or fully connected layers BIBREF21. The generator is trained using gradient descent to update the weights of the generator network with the aim of producing data (in our case, images) that the discriminator classifies as real.
Discriminator - The discriminator network can be thought of as a mapping from image data to the probability of the image coming from the real data space, and is also generally a deep neural network consisting of several convolution and/or fully connected layers. However, the discriminator performs down-sampling as opposed to up-sampling. Like the generator, it is trained using gradient descent but its goal is to update the weights so that it is more likely to correctly classify images as real or fake.
In GANs, the ideal outcome is for both the generator's and discriminator's cost functions to converge so that the generator produces photo-realistic images that are indistinguishable from real data, and the discriminator at the same time becomes an expert at differentiating between real and synthetic data. This, however, is not possible since a reduction in cost of one model generally leads to an increase in cost of the other. This phenomenon makes training GANs very difficult, and training them simultaneously (both models performing gradient descent in parallel) often leads to a stable orbit where neither model is able to converge. To combat this, the generator and discriminator are often trained independently. In this case, the GAN remains the same, but there are different training stages. In one stage, the weights of the generator are kept constant and gradient descent updates the weights of the discriminator, and in the other stage the weights of the discriminator are kept constant while gradient descent updates the weights of the generator. This is repeated for some number of epochs until a desired low cost for each model is reached BIBREF25.
<<</Generative Adversarial Neural Network>>>
<<<cGAN: Conditional GAN>>>
Conditional Generative Adversarial Networks (cGAN) are an enhancement of GANs proposed by BIBREF26 shortly after the introduction of GANs by BIBREF9. The objective function of the cGAN is defined in Eq. (DISPLAY_FORM13) which is very similar to the GAN objective function in Eq. (DISPLAY_FORM10) except that the inputs to both discriminator and generator are conditioned by a class label $y$.
The main technical innovation of cGAN is that it introduces an additional input or inputs to the original GAN model, allowing the model to be trained on information such as class labels or other conditioning variables as well as the samples themselves, concurrently. Whereas the original GAN was trained only with samples from the data distribution, resulting in the generated sample reflecting the general data distribution, cGAN enables directing the model to generate more tailored outputs.
In Figure FIGREF14, the condition vector is the class label (text string) "Red bird", which is fed to both the generator and discriminator. It is important, however, that the condition vector is related to the real data. If the model in Figure FIGREF14 was trained with the same set of real data (red birds) but the condition text was "Yellow fish", the generator would learn to create images of red birds when conditioned with the text "Yellow fish".
Note that the condition vector in cGAN can come in many forms, such as texts, not just limited to the class label. Such a unique design provides a direct solution to generate images conditioned by predefined specifications. As a result, cGAN has been used in text-to-image synthesis since the very first day of its invention although modern approaches can deliver much better text-to-image synthesis results.
black
<<</cGAN: Conditional GAN>>>
<<<Simple GAN Frameworks for Text-to-Image Synthesis>>>
In order to generate images from text, one simple solution is to employ the conditional GAN (cGAN) designs and add conditions to the training samples, such that the GAN is trained with respect to the underlying conditions. Several pioneer works have followed similar designs for text-to-image synthesis.
black An essential disadvantage of using cGAN for text-to-image synthesis is that that it cannot handle complicated textual descriptions for image generation, because cGAN uses labels as conditions to restrict the GAN inputs. If the text inputs have multiple keywords (or long text descriptions) they cannot be used simultaneously to restrict the input. Instead of using text as conditions, another two approaches BIBREF8, BIBREF16 use text as input features, and concatenate such features with other features to train discriminator and generator, as shown in Figure FIGREF15(b) and (c). To ensure text being used as GAN input, a feature embedding or feature representation learning BIBREF29, BIBREF30 function $\varphi ()$ is often introduced to convert input text as numeric features, which are further concatenated with other features to train GANs.
black
<<</Simple GAN Frameworks for Text-to-Image Synthesis>>>
<<<Advanced GAN Frameworks for Text-to-Image Synthesis>>>
Motivated by the GAN and conditional GAN (cGAN) design, many GAN based frameworks have been proposed to generate images, with different designs and architectures, such as using multiple discriminators, using progressively trained discriminators, or using hierarchical discriminators. Figure FIGREF17 outlines several advanced GAN frameworks in the literature. In addition to these frameworks, many news designs are being proposed to advance the field with rather sophisticated designs. For example, a recent work BIBREF37 proposes to use a pyramid generator and three independent discriminators, blackeach focusing on a different aspect of the images, to lead the generator towards creating images that are photo-realistic on multiple levels. Another recent publication BIBREF38 proposes to use discriminator to measure semantic relevance between image and text instead of class prediction (like most discriminator in GANs does), resulting a new GAN structure outperforming text conditioned auxiliary classifier (TAC-GAN) BIBREF16 and generating diverse, realistic, and relevant to the input text regardless of class.
black In the following section, we will first propose a taxonomy that summarizes advanced GAN frameworks for text-to-image synthesis, and review most recent proposed solutions to the challenge of generating photo-realistic images conditioned on natural language text descriptions using GANs. The solutions we discuss are selected based on relevance and quality of contributions. Many publications exist on the subject of image-generation using GANs, but in this paper we focus specifically on models for text-to-image synthesis, with the review emphasizing on the “model” and “contributions” for text-to-image synthesis. At the end of this section, we also briefly review methods using GANs for other image-synthesis applications.
black
<<</Advanced GAN Frameworks for Text-to-Image Synthesis>>>
<<</Preliminaries and Frameworks>>>
<<<Text-to-Image Synthesis Taxonomy and Categorization>>>
In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories, including Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANS to solve certain aspects of the text-to-mage synthesis challenges.
black
<<<GAN based Text-to-Image Synthesis Taxonomy>>>
Although the ultimate goal of Text-to-Image synthesis is to generate images closely related to the textual descriptions, the relevance of the images to the texts are often validated from different perspectives, due to the inherent diversity of human perceptions. For example, when generating images matching to the description “rose flowers”, some users many know the exact type of flowers they like and intend to generate rose flowers with similar colors. Other users, may seek to generate high quality rose flowers with a nice background (e.g. garden). The third group of users may be more interested in generating flowers similar to rose but with different colors and visual appearance, e.g. roses, begonia, and peony. The fourth group of users may want to not only generate flower images, but also use them to form a meaningful action, e.g. a video clip showing flower growth, performing a magic show using those flowers, or telling a love story using the flowers.
blackFrom the text-to-Image synthesis point of view, the first group of users intend to precisely control the semantic of the generated images, and their goal is to match the texts and images at the semantic level. The second group of users are more focused on the resolutions and the qualify of the images, in addition to the requirement that the images and texts are semantically related. For the third group of users, their goal is to diversify the output images, such that their images carry diversified visual appearances and are also semantically related. The fourth user group adds a new dimension in image synthesis, and aims to generate sequences of images which are coherent in temporal order, i.e. capture the motion information.
black Based on the above descriptions, we categorize GAN based Text-to-Image Synthesis into a taxonomy with four major categories, as shown in Fig. FIGREF24.
Semantic Enhancement GANs: Semantic enhancement GANs represent pioneer works of GAN frameworks for text-to-image synthesis. The main focus of the GAN frameworks is to ensure that the generated images are semantically related to the input texts. This objective is mainly achieved by using a neural network to encode texts as dense features, which are further fed to a second network to generate images matching to the texts.
Resolution Enhancement GANs: Resolution enhancement GANs mainly focus on generating high qualify images which are semantically matched to the texts. This is mainly achieved through a multi-stage GAN framework, where the outputs from earlier stage GANs are fed to the second (or later) stage GAN to generate better qualify images.
Diversity Enhancement GANs: Diversity enhancement GANs intend to diversify the output images, such that the generated images are not only semantically related but also have different types and visual appearance. This objective is mainly achieved through an additional component to estimate semantic relevance between generated images and texts, in order to maximize the output diversity.
Motion Enhancement GANs: Motion enhancement GANs intend to add a temporal dimension to the output images, such that they can form meaningful actions with respect to the text descriptions. This goal mainly achieved though a two-step process which first generates images matching to the “actions” of the texts, followed by a mapping or alignment procedure to ensure that images are coherent in the temporal order.
black In the following, we will introduce how these GAN frameworks evolve for text-to-image synthesis, and will also review some typical methods of each category.
black
<<</GAN based Text-to-Image Synthesis Taxonomy>>>
<<<Semantic Enhancement GANs>>>
Semantic relevance is one the of most important criteria of the text-to-image synthesis. For most GNAs discussed in this survey, they are required to generate images semantically related to the text descriptions. However, the semantic relevance is a rather subjective measure, and images are inherently rich in terms of its semantics and interpretations. Therefore, many GANs are further proposed to enhance the text-to-image synthesis from different perspectives. In this subsection, we will review several classical approaches which are commonly served as text-to-image synthesis baseline.
black
<<<DC-GAN>>>
Deep convolution generative adversarial network (DC-GAN) BIBREF8 represents the pioneer work for text-to-image synthesis using GANs. Its main goal is to train a deep convolutional generative adversarial network (DC-GAN) on text features. During this process these text features are encoded by another neural network. This neural network is a hybrid convolutional recurrent network at the character level. Concurrently, both neural networks have also feed-forward inference in the way they condition text features. Generating realistic images automatically from natural language text is the motivation of several of the works proposed in this computer vision field. However, actual artificial intelligence (AI) systems are far from achieving this task BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Lately, recurrent neural networks led the way to develop frameworks that learn discriminatively on text features. At the same time, generative adversarial networks (GANs) began recently to show some promise on generating compelling images of a whole host of elements including but not limited to faces, birds, flowers, and non-common images such as room interiorsBIBREF8. DC-GAN is a multimodal learning model that attempts to bridge together both of the above mentioned unsupervised machine learning algorithms, the recurrent neural networks (RNN) and generative adversarial networks (GANs), with the sole purpose of speeding the generation of text-to-image synthesis.
black Deep learning shed some light to some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, a bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas. DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text to image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carry five text descriptions. These text descriptions were created by the research team when setting up the evaluation environment. The DC-GANs model is subsequently trained on several subcategories. Subcategories in this research represent the training and testing sub datasets. The performance shown by these experiments display a promising yet effective way to generate images from textual natural language descriptions BIBREF8.
black
<<</DC-GAN>>>
<<<DC-GAN Extensions>>>
Following the pioneer DC-GAN framework BIBREF8, many researches propose revised network structures (e.g. different discriminaotrs) in order to improve images with better semantic relevance to the texts. Based on the deep convolutional adversarial network (DC-GAN) network architecture, GAN-CLS with image-text matching discriminator, GAN-INT learned with text manifold interpolation and GAN-INT-CLS which combines both are proposed to find semantic match between text and image. Similar to the DC-GAN architecture, an adaptive loss function (i.e. Perceptual Loss BIBREF48) is proposed for semantic image synthesis which can synthesize a realistic image that not only matches the target text description but also keep the irrelavant features(e.g. background) from source images BIBREF49. Regarding to the Perceptual Losses, three loss functions (i.e. Pixel reconstruction loss, Activation reconstruction loss and Texture reconstruction loss) are proposed in BIBREF50 in which they construct the network architectures based on the DC-GAN, i.e. GAN-INT-CLS-Pixel, GAN-INT-CLS-VGG and GAN-INT-CLS-Gram with respect to three losses. In BIBREF49, a residual transformation unit is added in the network to retain similar structure of the source image.
black Following the BIBREF49 and considering the features in early layers address background while foreground is obtained in latter layers in CNN, a pair of discriminators with different architectures (i.e. Paired-D GAN) is proposed to synthesize background and foreground from a source image seperately BIBREF51. Meanwhile, the skip-connection in the generator is employed to more precisely retain background information in the source image.
black
<<</DC-GAN Extensions>>>
<<<MC-GAN>>>
When synthesising images, most text-to-image synthesis methods consider each output image as one single unit to characterize its semantic relevance to the texts. This is likely problematic because most images naturally consist of two crucial components: foreground and background. Without properly separating these two components, it's hard to characterize the semantics of an image if the whole image is treated as a single unit without proper separation.
black In order to enhance the semantic relevance of the images, a multi-conditional GAN (MC-GAN) BIBREF52 is proposed to synthesize a target image by combining the background of a source image and a text-described foreground object which does not exist in the source image. A unique feature of MC-GAN is that it proposes a synthesis block in which the background feature is extracted from the given image without non-linear function (i.e. only using convolution and batch normalization) and the foreground feature is the feature map from the previous layer.
black Because MC-GAN is able to properly model the background and foreground of the generated images, a unique strength of MC-GAN is that users are able to provide a base image and MC-GAN is able to preserve the background information of the base image to generate new images. black
<<</MC-GAN>>>
<<</Semantic Enhancement GANs>>>
<<<Resolution Enhancement GANs>>>
Due to the fact that training GANs will be much difficult when generating high-resolution images, a two stage GAN (i.e. stackGAN) is proposed in which rough images(i.e. low-resolution images) are generated in stage-I and refined in stage-II. To further improve the quality of generated images, the second version of StackGAN (i.e. Stack++) is proposed to use multi-stage GANs to generate multi-scale images. A color-consistency regularization term is also added into the loss to keep the consistency of images in different scales.
black While stackGAN and StackGAN++ are both built on the global sentence vector, AttnGAN is proposed to use attention mechanism (i.e. Deep Attentional Multimodal Similarity Model (DAMSM)) to model the multi-level information (i.e. word level and sentence level) into GANs. In the following, StackGAN, StackGAN++ and AttnGAN will be explained in detail.
black Recently, Dynamic Memory Generative Adversarial Network (i.e. DM-GAN)BIBREF53 which uses a dynamic memory component is proposed to focus on refiningthe initial generated image which is the key to the success of generating high quality images.
<<<StackGAN>>>
In 2017, Zhang et al. proposed a model for generating photo-realistic images from text descriptions called StackGAN (Stacked Generative Adversarial Network) BIBREF33. In their work, they define a two-stage model that uses two cascaded GANs, each corresponding to one of the stages. The stage I GAN takes a text description as input, converts the text description to a text embedding containing several conditioning variables, and generates a low-quality 64x64 image with rough shapes and colors based on the computed conditioning variables. The stage II GAN then takes this low-quality stage I image as well as the same text embedding and uses the conditioning variables to correct and add more detail to the stage I result. The output of stage II is a photorealistic 256$times$256 image that resembles the text description with compelling accuracy.
One major contribution of StackGAN is the use of cascaded GANs for text-to-image synthesis through a sketch-refinement process. By conditioning the stage II GAN on the image produced by the stage I GAN and text description, the stage II GAN is able to correct defects in the stage I output, resulting in high-quality 256x256 images. Prior works have utilized “stacked” GANs to separate the image generation process into structure and style BIBREF42, multiple stages each generating lower-level representations from higher-level representations of the previous stage BIBREF35, and multiple stages combined with a laplacian pyramid approach BIBREF54, which was introduced for image compression by P. Burt and E. Adelson in 1983 and uses the differences between consecutive down-samples of an original image to reconstruct the original image from its down-sampled version BIBREF55. However, these works did not use text descriptions to condition their generator models.
Conditioning Augmentation is the other major contribution of StackGAN. Prior works transformed the natural language text description into a fixed text embedding containing static conditioning variables which were fed to the generator BIBREF8. StackGAN does this and then creates a Gaussian distribution from the text embedding and randomly selects variables from the Gaussian distribution to add to the set of conditioning variables during training. This encourages robustness by introducing small variations to the original text embedding for a particular training image while keeping the training image that the generated output is compared to the same. The result is that the trained model produces more diverse images in the same distribution when using Conditioning Augmentation than the same model using a fixed text embedding BIBREF33.
<<</StackGAN>>>
<<<StackGAN++>>>
Proposed by the same users as StackGAN, StackGAN++ is also a stacked GAN model, but organizes the generators and discriminators in a “tree-like” structure BIBREF47 with multiple stages. The first stage combines a noise vector and conditioning variables (with Conditional Augmentation introduced in BIBREF33) for input to the first generator, which generates a low-resolution image, 64$\times $64 by default (this can be changed depending on the desired number of stages). Each following stage uses the result from the previous stage and the conditioning variables to produce gradually higher-resolution images. These stages do not use the noise vector again, as the creators assume that the randomness it introduces is already preserved in the output of the first stage. The final stage produces a 256$\times $256 high-quality image.
StackGAN++ introduces the joint conditional and unconditional approximation in their designs BIBREF47. The discriminators are trained to calculate the loss between the image produced by the generator and the conditioning variables (measuring how accurately the image represents the description) as well as the loss between the image and real images (probability of the image being real or fake). The generators then aim to minimize the sum of these losses, improving the final result.
<<</StackGAN++>>>
<<<AttnGAN>>>
Attentional Generative Adversarial Network (AttnGAN) BIBREF10 is very similar, in terms of its structure, to StackGAN++ BIBREF47, discussed in the previous section, but some novel components are added. Like previous works BIBREF56, BIBREF8, BIBREF33, BIBREF47, a text encoder generates a text embedding with conditioning variables based on the overall sentence. Additionally, the text encoder generates a separate text embedding with conditioning variables based on individual words. This process is optimized to produce meaningful variables using a bidirectional recurrent neural network (BRNN), more specifically bidirectional Long Short Term Memory (LSTM) BIBREF57, which, for each word in the description, generates conditions based on the previous word as well as the next word (bidirectional). The first stage of AttnGAN generates a low-resolution image based on the sentence-level text embedding and random noise vector. The output is fed along with the word-level text embedding to an “attention model”, which matches the word-level conditioning variables to regions of the stage I image, producing a word-context matrix. This is then fed to the next stage of the model along with the raw previous stage output. Each consecutive stage works in the same manner, but produces gradually higher-resolution images conditioned on the previous stage.
Two major contributions were introduced in AttnGAN: the attentional generative network and the Deep Attentional Multimodal Similarity Model (DAMSM) BIBREF47. The attentional generative network matches specific regions of each stage's output image to conditioning variables from the word-level text embedding. This is a very worthy contribution, allowing each consecutive stage to focus on specific regions of the image independently, adding “attentional” details region by region as opposed to the whole image. The DAMSM is also a key feature introduced by AttnGAN, which is used after the result of the final stage to calculate the similarity between the generated image and the text embedding at both the sentence level and the more fine-grained word level. Table TABREF48 shows scores from different metrics for StackGAN, StackGAN++, AttnGAN, and HDGAN on the CUB, Oxford, and COCO datasets. The table shows that AttnGAN outperforms the other models in terms of IS on the CUB dataset by a small amount and greatly outperforms them on the COCO dataset.
<<</AttnGAN>>>
<<<HDGAN>>>
Hierarchically-nested adversarial network (HDGAN) is a method proposed by BIBREF36, and its main objective is to tackle the difficult problem of dealing with photographic images from semantic text descriptions. These semantic text descriptions are applied on images from diverse datasets. This method introduces adversarial objectives nested inside hierarchically oriented networks BIBREF36. Hierarchical networks helps regularize mid-level manifestations. In addition to regularize mid-level manifestations, it assists the training of the generator in order to capture highly complex still media elements. These elements are captured in statistical order to train the generator based on settings extracted directly from the image. The latter is an ideal scenario. However, this paper aims to incorporate a single-stream architecture. This single-stream architecture functions as the generator that will form an optimum adaptability towards the jointed discriminators. Once jointed discriminators are setup in an optimum manner, the single-stream architecture will then advance generated images to achieve a much higher resolution BIBREF36.
The main contributions of the HDGANs include the introduction of a visual-semantic similarity measure BIBREF36. This feature will aid in the evaluation of the consistency of generated images. In addition to checking the consistency of generated images, one of the key objectives of this step is to test the logical consistency of the end product BIBREF36. The end product in this case would be images that are semantically mapped from text-based natural language descriptions to each area on the picture e.g. a wing on a bird or petal on a flower. Deep learning has created a multitude of opportunities and challenges for researchers in the computer vision AI field. Coupled with GAN and multimodal learning architectures, this field has seen tremendous growth BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Based on these advancements, HDGANs attempt to further extend some desirable and less common features when generating images from textual natural language BIBREF36. In other words, it takes sentences and treats them as a hierarchical structure. This has some positive and negative implications in most cases. For starters, it makes it more complex to generate compelling images. However, one of the key benefits of this elaborate process is the realism obtained once all processes are completed. In addition, one common feature added to this process is the ability to identify parts of sentences with bounding boxes. If a sentence includes common characteristics of a bird, it will surround the attributes of such bird with bounding boxes. In practice, this should happen if the desired image have other elements such as human faces (e.g. eyes, hair, etc), flowers (e.g. petal size, color, etc), or any other inanimate object (e.g. a table, a mug, etc). Finally, HDGANs evaluated some of its claims on common ideal text-to-image datasets such as CUB, COCO, and Oxford-102 BIBREF8, BIBREF36, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. These datasets were first utilized on earlier works BIBREF8, and most of them sport modified features such image annotations, labels, or descriptions. The qualitative and quantitative results reported by researchers in this study were far superior of earlier works in this same field of computer vision AI.
black
<<</HDGAN>>>
<<</Resolution Enhancement GANs>>>
<<<Diversity Enhancement GANs>>>
In this subsection, we introduce text-to-image synthesis methods which try to maximize the diversity of the output images, based on the text descriptions.
black
<<<AC-GAN>>>
Two issues arise in the traditional GANs BIBREF58 for image synthesis: (1) scalabilirty problem: traditional GANs cannot predict a large number of image categories; and (2) diversity problem: images are often subject to one-to-many mapping, so one image could be labeled as different tags or being described using different texts. To address these problems, GAN conditioned on additional information, e.g. cGAN, is an alternative solution. However, although cGAN and many previously introduced approaches are able to generate images with respect to the text descriptions, they often output images with similar types and visual appearance.
black Slightly different from the cGAN, auxiliary classifier GANs (AC-GAN) BIBREF27 proposes to improve the diversity of output images by using an auxiliary classifier to control output images. The overall structure of AC-GAN is shown in Fig. FIGREF15(c). In AC-GAN, every generated image is associated with a class label, in addition to the true/fake label which are commonly used in GAN or cGAN. The discriminator of AC-GAN not only outputs a probability distribution over sources (i.e. whether the image is true or fake), it also output a probability distribution over the class label (i.e. predict which class the image belong to).
black By using an auxiliary classifier layer to predict the class of the image, AC-GAN is able to use the predicted class labels of the images to ensure that the output consists of images from different classes, resulting in diversified synthesis images. The results show that AC-GAN can generate images with high diversity.
black
<<</AC-GAN>>>
<<<TAC-GAN>>>
Building on the AC-GAN, TAC-GAN BIBREF59 is proposed to replace the class information with textual descriptions as the input to perform the task of text to image synthesis. The architecture of TAC-GAN is shown in Fig. FIGREF15(d), which is similar to AC-GAN. Overall, the major difference between TAC-GAN and AC-GAN is that TAC-GAN conditions the generated images on text descriptions instead of on a class label. This design makes TAC-GAN more generic for image synthesis.
black For TAC-GAN, it imposes restrictions on generated images in both texts and class labels. The input vector of TAC-GAN's generative network is built based on a noise vector and embedded vector representation of textual descriptions. The discriminator of TAC-GAN is similar to that of the AC-GAN, which not only predicts whether the image is fake or not, but also predicts the label of the images. A minor difference of TAC-GAN's discriminator, compared to that of the AC-GAN, is that it also receives text information as input before performing its classification.
black The experiments and validations, on the Oxford-102 flowers dataset, show that the results produced by TAC-GAN are “slightly better” that other approaches, including GAN-INT-CLS and StackGAN.
black
<<</TAC-GAN>>>
<<<Text-SeGAN>>>
In order to improve the diversity of the output images, both AC-GAN and TAC-GAN's discriminators predict class labels of the synthesised images. This process likely enforces the semantic diversity of the images, but class labels are inherently restrictive in describing image semantics, and images described by text can be matched to multiple labels. Therefore, instead of predicting images' class labels, an alternative solution is to directly quantify their semantic relevance.
black The architecture of Text-SeGAN is shown in Fig. FIGREF15(e). In order to directly quantify semantic relevance, Text-SeGAN BIBREF28 adds a regression layer to estimate the semantic relevance between the image and text instead of a classifier layer of predicting labels. The estimated semantic reference is a fractional value ranging between 0 and 1, with a higher value reflecting better semantic relevance between the image and text. Due to this unique design, an inherent advantage of Text-SeGAN is that the generated images are not limited to certain classes and are semantically matching to the text input.
black Experiments and validations, on Oxford-102 flower dataset, show that Text-SeGAN can generate diverse images that are semantically relevant to the input text. In addition, the results of Text-SeGAN show improved inception score compared to other approaches, including GAN-INT-CLS, StackGAN, TAC-GAN, and HDGAN.
black
<<</Text-SeGAN>>>
<<<MirrorGAN and Scene Graph GAN>>>
Due to the inherent complexity of the visual images, and the diversity of text descriptions (i.e. same words could imply different meanings), it is difficulty to precisely match the texts to the visual images at the semantic levels. For most methods we have discussed so far, they employ a direct text to image generation process, but there is no validation about how generated images comply with the text in a reverse fashion.
black To ensure the semantic consistency and diversity, MirrorGAN BIBREF60 employs a mirror structure, which reversely learns from generated images to output texts (an image-to-text process) to further validate whether generated are indeed consistent to the input texts. MirrowGAN includes three modules: a semantic text embedding module (STEM), a global-local collaborative attentive module for cascaded image generation (GLAM), and a semantic text regeneration and alignment module (STREAM). The back to back Text-to-Image (T2I) and Image-to-Text (I2T) are combined to progressively enhance the diversity and semantic consistency of the generated images.
black In order to enhance the diversity of the output image, Scene Graph GAN BIBREF61 proposes to use visual scene graphs to describe the layout of the objects, allowing users to precisely specific the relationships between objects in the images. In order to convert the visual scene graph as input for GAN to generate images, this method uses graph convolution to process input graphs. It computes a scene layout by predicting bounding boxes and segmentation masks for objects. After that, it converts the computed layout to an image with a cascaded refinement network.
black
<<</MirrorGAN and Scene Graph GAN>>>
<<</Diversity Enhancement GANs>>>
<<<Motion Enhancement GANs>>>
Instead of focusing on generating static images, another line of text-to-image synthesis research focuses on generating videos (i.e. sequences of images) from texts. In this context, the synthesised videos are often useful resources for automated assistance or story telling.
black
<<<ObamaNet and T2S>>>
One early/interesting work of motion enhancement GANs is to generate spoofed speech and lip-sync videos (or talking face) of Barack Obama (i.e. ObamaNet) based on text input BIBREF62. This framework is consisted of three parts, i.e. text to speech using “Char2Wav”, mouth shape representation synced to the audio using a time-delayed LSTM and “video generation” conditioned on the mouth shape using “U-Net” architecture. Although the results seem promising, ObamaNet only models the mouth region and the videos are not generated from noise which can be regarded as video prediction other than video generation.
black Another meaningful trial of using synthesised videos for automated assistance is to translate spoken language (e.g. text) into sign language video sequences (i.e. T2S) BIBREF63. This is often achieved through a two step process: converting texts as meaningful units to generate images, followed by a learning component to arrange images into sequential order for best representation. More specifically, using RNN based machine translation methods, texts are translated into sign language gloss sequences. Then, glosses are mapped to skeletal pose sequences using a lookup-table. To generate videos, a conditional DCGAN with the input of concatenation of latent representation of the image for a base pose and skeletal pose information is built.
black
<<</ObamaNet and T2S>>>
<<<T2V>>>
In BIBREF64, a text-to-video model (T2V) is proposed based on the cGAN in which the input is the isometric Gaussian noise with the text-gist vector served as the generator. A key component of generating videos from text is to train a conditional generative model to extract both static and dynamic information from text, followed by a hybrid framework combining a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN).
black More specifically, T2V relies on two types of features, static features and dynamic features, to generate videos. Static features, called “gist” are used to sketch text-conditioned background color and object layout structure. Dynamic features, on the other hand, are considered by transforming input text into an image filter which eventually forms the video generator which consists of three entangled neural networks. The text-gist vector is generated by a gist generator which maintains static information (e.g. background) and a text2filter which captures the dynamic information (i.e. actions) in the text to generate videos.
black As demonstrated in the paper BIBREF64, the generated videos are semantically related to the texts, but have a rather low quality (e.g. only $64 \times 64$ resolution).
black
<<</T2V>>>
<<<StoryGAN>>>
Different from T2V which generates videos from a single text, StoryGAN aims to produce dynamic scenes consistent of specified texts (i.e. story written in a multi-sentence paragraph) using a sequential GAN model BIBREF65. Story encoder, context encoder, and discriminators are the main components of this model. By using stochastic sampling, the story encoder intends to learn an low-dimensional embedding vector for the whole story to keep the continuity of the story. The context encoder is proposed to capture contextual information during sequential image generation based on a deep RNN. Two discriminators of StoryGAN are image discriminator which evaluates the generated images and story discriminator which ensures the global consistency.
black The experiments and comparisons, on CLEVR dataset and Pororo cartoon dataset which are originally used for visual question answering, show that StoryGAN improves the generated video qualify in terms of Structural Similarity Index (SSIM), visual qualify, consistence, and relevance (the last three measure are based on human evaluation).
<<</StoryGAN>>>
<<</Motion Enhancement GANs>>>
<<</Text-to-Image Synthesis Taxonomy and Categorization>>>
<<<GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons>>>
<<<Text-to-image Synthesis Applications>>>
Computer vision applications have strong potential for industries including but not limited to the medical, government, military, entertainment, and online social media fields BIBREF7, BIBREF66, BIBREF67, BIBREF68, BIBREF69, BIBREF70. Text-to-image synthesis is one such application in computer vision AI that has become the main focus in recent years due to its potential for providing beneficial properties and opportunities for a wide range of applicable areas.
Text-to-image synthesis is an application byproduct of deep convolutional decoder networks in combination with GANs BIBREF7, BIBREF8, BIBREF10. Deep convolutional networks have contributed to several breakthroughs in image, video, speech, and audio processing. This learning method intends, among other possibilities, to help translate sequential text descriptions to images supplemented by one or many additional methods. Algorithms and methods developed in the computer vision field have allowed researchers in recent years to create realistic images from plain sentences. Advances in the computer vision, deep convolutional nets, and semantic units have shined light and redirected focus to this research area of text-to-image synthesis, having as its prime directive: to aid in the generation of compelling images with as much fidelity to text descriptions as possible.
To date, models for generating synthetic images from textual natural language in research laboratories at universities and private companies have yielded compelling images of flowers and birds BIBREF8. Though flowers and birds are the most common objects studied thus far, research has been applied to other classes as well. For example, there have been studies focused solely on human faces BIBREF7, BIBREF8, BIBREF71, BIBREF72.
It’s a fascinating time for computer vision AI and deep learning researchers and enthusiasts. The consistent advancement in hardware, software, and contemporaneous development of computer vision AI research disrupts multiple industries. These advances in technology allow for the extraction of several data types from a variety of sources. For example, image data captured from a variety of photo-ready devices, such as smart-phones, and online social media services opened the door to the analysis of large amounts of media datasets BIBREF70. The availability of large media datasets allow new frameworks and algorithms to be proposed and tested on real-world data.
<<</Text-to-image Synthesis Applications>>>
<<<Text-to-image Synthesis Benchmark Datasets>>>
A summary of some reviewed methods and benchmark datasets used for validation is reported in Table TABREF43. In addition, the performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48.
In order to synthesize images from text descriptions, many frameworks have taken a minimalistic approach by creating small and background-less images BIBREF73. In most cases, the experiments were conducted on simple datasets, initially containing images of birds and flowers. BIBREF8 contributed to these data sets by adding corresponding natural language text descriptions to subsets of the CUB, MSCOCO, and Oxford-102 datasets, which facilitated the work on text-to-image synthesis for several papers released more recently.
While most deep learning algorithms use MNIST BIBREF74 dataset as the benchmark, there are three main datasets that are commonly used for evaluation of proposed GAN models for text-to-image synthesis: CUB BIBREF75, Oxford BIBREF76, COCO BIBREF77, and CIFAR-10 BIBREF78. CUB BIBREF75 contains 200 birds with matching text descriptions and Oxford BIBREF76 contains 102 categories of flowers with 40-258 images each and matching text descriptions. These datasets contain individual objects, with the text description corresponding to that object, making them relatively simple. COCO BIBREF77 is much more complex, containing 328k images with 91 different object types. CIFAI-10 BIBREF78 dataset consists of 60000 32$times$32 colour images in 10 classes, with 6000 images per class. In contrast to CUB and Oxford, whose images each contain an individual object, COCO’s images may contain multiple objects, each with a label, so there are many labels per image. The total number of labels over the 328k images is 2.5 million BIBREF77.
<<</Text-to-image Synthesis Benchmark Datasets>>>
<<<Text-to-image Synthesis Benchmark Evaluation Metrics>>>
Several evaluation metrics are used for judging the images produced by text-to-image GANs. Proposed by BIBREF25, Inception Scores (IS) calculates the entropy (randomness) of the conditional distribution, obtained by applying the Inception Model introduced in BIBREF79, and marginal distribution of a large set of generated images, which should be low and high, respectively, for meaningful images. Low entropy of conditional distribution means that the evaluator is confident that the images came from the data distribution, and high entropy of the marginal distribution means that the set of generated images is diverse, which are both desired features. The IS score is then computed as the KL-divergence between the two entropies. FCN-scores BIBREF2 are computed in a similar manner, relying on the intuition that realistic images generated by a GAN should be able to be classified correctly by a classifier trained on real images of the same distribution. Therefore, if the FCN classifier classifies a set of synthetic images accurately, the image is probably realistic, and the corresponding GAN gets a high FCN score. Frechet Inception Distance (FID) BIBREF80 is the other commonly used evaluation metric, and takes a different approach, actually comparing the generated images to real images in the distribution. A high FID means there is little relationship between statistics of the synthetic and real images and vice versa, so lower FIDs are better.
black The performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48. In addition, Figure FIGREF49 further lists the performance of 14 GANs with respect to their Inception Scores (IS).
<<</Text-to-image Synthesis Benchmark Evaluation Metrics>>>
<<<GAN Based Text-to-image Synthesis Results Comparison>>>
While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we were unfortunately unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). For the models with missing data, the best assessment we can offer is our own judgement based on the example images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets, while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.
In terms of the Inception Score (IS), which is the metric applied to the majority of the models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed a slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.
In addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are the most recently developed methods in the field (both published in 2019), indicating that research in text-to-image synthesis is continuously improving the results for better visual perception and interpretation. On the technical side, DM-GAN BIBREF53 is a model using dynamic memory to refine the fuzzy image contents initially generated from the GAN networks. A memory writing gate is used by DM-GAN to select important text information and generate images based on the selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object-centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generation step, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on image details and text semantics for better understanding and perception.
<<</GAN Based Text-to-image Synthesis Results Comparison>>>
<<<Notable Mentions>>>
It is worth noting that although this survey mainly focuses on text-to-image synthesis, there have been other applications of GANs in the broader image synthesis field that we found fascinating and worth dedicating a small section to. For example, BIBREF72 used Sem-Latent GANs to generate images of faces based on facial attributes, producing impressive results that, at a glance, could be mistaken for real faces. BIBREF82, BIBREF70, and BIBREF83 demonstrated great success in generating text descriptions from images (image captioning) with high accuracy, with BIBREF82 using an attention-based model that automatically learns to focus on salient objects and BIBREF83 using deep visual-semantic alignments. Finally, there is a contribution made by StackGAN++ that was not mentioned in the dedicated section due to its relation to unconditional image generation as opposed to conditional generation, namely a color-regularization term BIBREF47. This additional term aims to keep the samples generated from the same input at different stages more consistent in color, which resulted in significantly better results for the unconditional model.
<<</Notable Mentions>>>
<<</GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons>>>
<<<Conclusion>>>
The recent advancement in text-to-image synthesis research opens the door to several compelling methods and architectures. The main objective of text-to-image synthesis initially was to create images from simple labels, and this objective later scaled to natural language. In this paper, we reviewed novel methods that generate, in our opinion, the most visually-rich and photo-realistic images from text-based natural language. These generated images often rely on generative adversarial networks (GANs), deep convolutional decoder networks, and multimodal learning methods.
In the paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and differences of the methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definition and key contributions of some advanced GAN frameworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAn, StoryGAN etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch-sized samples, in other words, beyond the work of BIBREF8 in which images were generated from text as 64$\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We were also able to locate some important papers that were as impressive as the papers we finally surveyed, though these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension of the works surveyed in this paper would be to give more independence to the several learning methods involved in the studies (e.g. less human intervention), as well as increasing the size of the output images.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nblackTraditional Learning Based Text-to-image Synthesis\nGAN Based Text-to-image Synthesis\nRelated Work\nPreliminaries and Frameworks\nGenerative Adversarial Neural Network\ncGAN: Conditional GAN\nSimple GAN Frameworks for Text-to-Image Synthesis\nAdvanced GAN Frameworks for Text-to-Image Synthesis\nText-to-Image Synthesis Taxonomy and Categorization\nGAN based Text-to-Image Synthesis Taxonomy\nSemantic Enhancement GANs\nDC-GAN\nDC-GAN Extensions\nMC-GAN\nResolution Enhancement GANs\nStackGAN\nStackGAN++\nAttnGAN\nHDGAN\nDiversity Enhancement GANs\nAC-GAN\nTAC-GAN\nText-SeGAN\nMirrorGAN and Scene Graph GAN\nMotion Enhancement GANs\nObamaNet and T2S\nT2V\nStoryGAN\nGAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons\nText-to-image Synthesis Applications\nText-to-image Synthesis Benchmark Datasets\nText-to-image Synthesis Benchmark Evaluation Metrics\nGAN Based Text-to-image Synthesis Results Comparison\nNotable Mentions\nConclusion"
],
"type": "outline"
}
|
1910.04601
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension
<<<Abstract>>>
Recent studies revealed that reading comprehension (RC) systems learn to exploit annotation artifacts and other biases in current datasets. This allows systems to "cheat" by employing simple heuristics to answer questions, e.g. by relying on semantic type consistency. This means that current datasets are not well-suited to evaluate RC systems. To address this issue, we introduce RC-QED, a new RC task that requires giving not only the correct answer to a question, but also the reasoning employed for arriving at this answer. For this, we release a large benchmark dataset consisting of 12,000 answers and corresponding reasoning in the form of natural language derivations. Experiments show that our benchmark is robust to simple heuristics and challenging for state-of-the-art neural path ranking approaches.
<<</Abstract>>>
<<<Introduction>>>
Reading comprehension (RC) has become a key benchmark for natural language understanding (NLU) systems and a large number of datasets are now available BIBREF0, BIBREF1, BIBREF2. However, these datasets suffer from annotation artifacts and other biases, which allow systems to “cheat”: Instead of learning to read texts, systems learn to exploit these biases and find answers via simple heuristics, such as looking for an entity with a matching semantic type BIBREF3, BIBREF4. To give another example, many RC datasets contain a large number of “easy” problems that can be solved by looking at the first few words of the question Sugawara2018. In order to provide a reliable measure of progress, an RC dataset thus needs to be robust to such simple heuristics.
Towards this goal, two important directions have been investigated. One direction is to improve the dataset itself, for example, so that it requires an RC system to perform multi-hop inferences BIBREF0 or to generate answers BIBREF1. Another direction is to request a system to output additional information about answers. Yang2018HotpotQA:Answering propose HotpotQA, an “explainable” multi-hop Question Answering (QA) task that requires a system to identify a set of sentences containing supporting evidence for the given answer. We follow the footsteps of Yang2018HotpotQA:Answering and explore an explainable multi-hop QA task.
In the community, two important types of explanations have been explored so far BIBREF5: (i) introspective explanation (how a decision is made), and (ii) justification explanation (collections of evidence to support the decision). In this sense, supporting facts in HotpotQA can be categorized as justification explanations. The advantage of using justification explanations as a benchmark is that the task can be reduced to a standard classification task, which enables us to adopt standard evaluation metrics (e.g. classification accuracy). However, this task setting does not evaluate a machine's ability to (i) extract relevant information from justification sentences and (ii) synthesize them to form coherent logical reasoning steps, which are equally important for NLU.
To address this issue, we propose RC-QED, an RC task that requires not only the answer to a question, but also an introspective explanation in the form of a natural language derivation (NLD). For example, given the question “Which record company released the song Barracuda?” and supporting documents shown in Figure FIGREF1, a system needs to give the answer “Portrait Records” and to provide the following NLD: 1.) Barracuda is on Little Queen, and 2.) Little Queen was released by Portrait Records.
The main difference between our work and HotpotQA is that they identify a set of sentences $\lbrace s_2,s_4\rbrace $, while RC-QED requires a system to generate its derivations in a correct order. This generation task enables us to measure a machine's logical reasoning ability mentioned above. Due to its subjective nature of the natural language derivation task, we evaluate the correctness of derivations generated by a system with multiple reference answers. Our contributions can be summarized as follows:
We create a large corpus consisting of 12,000 QA pairs and natural language derivations. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations.
Through an experiment using two baseline models, we highlight several challenges of RC-QED.
We will make the corpus of reasoning annotations and the baseline system publicly available at https://naoya-i.github.io/rc-qed/.
<<</Introduction>>>
<<<Task formulation: RC-QED>>>
<<<Input, output, and evaluation metrics>>>
We formally define RC-QED as follows:
Given: (i) a question $Q$, and (ii) a set $S$ of supporting documents relevant to $Q$;
Find: (i) answerability $s \in \lbrace \textsf {Answerable},$ $\textsf {Unanswerable} \rbrace $, (ii) an answer $a$, and (iii) a sequence $R$ of derivation steps.
We evaluate each prediction with the following evaluation metrics:
Answerability: Correctness of model's decision on answerability (i.e. binary classification task) evaluated by Precision/Recall/F1.
Answer precision: Correctness of predicted answers (for Answerable predictions only). We follow the standard practice of RC community for evaluation (e.g. an accuracy in the case of multiple choice QA).
Derivation precision: Correctness of generated NLDs evaluated by ROUGE-L BIBREF6 (RG-L) and BLEU-4 (BL-4) BIBREF7. We follow the standard practice of evaluation for natural language generation BIBREF1. Derivation steps might be subjective, so we resort to multiple reference answers.
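As a small illustration of scoring a generated derivation step against multiple references (this is a sketch, not the evaluation script actually used in this work), BLEU-4 can be computed with NLTK as follows; the reference and hypothesis sentences here are invented.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Two hypothetical human-written reference derivations and one system output, tokenized.
references = [
    "Barracuda is on Little Queen .".split(),
    "Barracuda appears on the album Little Queen .".split(),
]
hypothesis = "Barracuda is a song on Little Queen .".split()

# BLEU-4 with uniform n-gram weights; smoothing avoids zero scores on short sentences.
score = sentence_bleu(references, hypothesis,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print("BLEU-4:", round(score, 3))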
<<</Input, output, and evaluation metrics>>>
<<<RC-QED@!START@$^{\rm E}$@!END@>>>
This paper instantiates RC-QED by employing multiple-choice, entity-based multi-hop QA BIBREF0 as a testbed (henceforth, RC-QED$^{\rm E}$). In entity-based multi-hop QA, machines need to combine relational facts between entities to derive an answer. For example, in Figure FIGREF1, understanding the facts about Barracuda, Little Queen, and Portrait Records stated in each article is required. This design choice restricts the problem domain, but it provides interesting challenges as discussed in Section SECREF46. In addition, such entity-based chaining is known to account for the majority of reasoning types required for multi-hop reasoning BIBREF2.
More formally, given (i) a question $Q=(r, q)$ represented by a binary relation $r$ and an entity $q$ (question entity), (ii) relevant articles $S$, and (iii) a set $C$ of candidate entities, systems are required to output (i) an answerability $s \in \lbrace \textsf {Answerable}, \textsf {Unanswerable} \rbrace $, (ii) an entity $e \in C$ (answer entity) such that $(q, r, e)$ holds, and (iii) a sequence $R$ of derivation steps as to why $e$ is believed to be an answer. We define derivation steps as an $m$-chain of relational facts used to derive an answer, i.e. $(q, r_1, e_1), (e_1, r_2, e_2), ..., (e_{m-1}, r_{m-1}, e_m), (e_m, r_m, e_{m+1})$. Although we restrict the form of knowledge to entity relations, we use a natural language form to represent $r_i$ rather than a closed vocabulary (see Figure FIGREF1 for an example).
<<</RC-QED@!START@$^{\rm E}$@!END@>>>
<<</Task formulation: RC-QED>>>
<<<Data collection for RC-QED@!START@$^{\rm E}$@!END@>>>
<<<Crowdsourcing interface>>>
To acquire a large-scale corpus of NLDs, we use crowdsourcing (CS). Although CS is a powerful tool for large-scale dataset creation BIBREF2, BIBREF8, quality control for complex tasks is still challenging. We thus carefully design an incentive structure for crowdworkers, following Yang2018HotpotQA:Answering.
Initially, we provide crowdworkers with an instruction with example annotations, where we emphasize that they judge the truth of statements solely based on given articles, not based on their own knowledge.
<<<Judgement task (Figure @!START@UID13@!END@).>>>
Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).
<<</Judgement task (Figure @!START@UID13@!END@).>>>
<<<Derivation task (Figure @!START@UID14@!END@).>>>
If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences. We give a ¢6 bonus to those workers who select True or Likely. To encourage an abstraction of selected sentences, we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs. Specifically, we probabilistically give another ¢14 bonus to workers according to a score they gain. The score is always shown on top of the screen, and changes according to the length of NLDs they write in real time. To discourage noisy annotations, we also warn crowdworkers that their work would be rejected for noisy submissions. We periodically run simple filtering to exclude noisy crowdworkers (e.g. workers who give more than 50 submissions with the same answers).
We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\ge 5,000$ HITs experiences and an approval rate of $\ge $ 99.0%, and pay ¢20 as a reward per instance.
Our data collection pipeline is expected to be applicable to other types of QAs other than entity-based multi-hop QA without any significant extensions, because the interface is not specifically designed for entity-centric reasoning.
<<</Derivation task (Figure @!START@UID14@!END@).>>>
<<</Crowdsourcing interface>>>
<<<Dataset>>>
Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used. We randomly sampled 10,000 instances from 43,738 training instances and 2,000 instances from 5,129 validation instances (i.e. 36,000 annotation tasks were published on AMT). We manually converted structured WikiHop question-answer pairs (e.g. locatedIn(Macchu Picchu, Peru)) into natural language statements (Macchu Picchu is located in Peru) using a simple conversion dictionary.
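The template-based conversion mentioned above can be sketched roughly as follows; the relation names and template strings are illustrative assumptions, not the actual conversion dictionary used for annotation.

# Hypothetical templates mapping WikiHop-style relations to natural language statements.
TEMPLATES = {
    "locatedIn": "{subject} is located in {object}",
    "recordLabel": "{subject} was released by {object}",
}

def triple_to_statement(relation, subject, obj):
    # Fall back to a generic pattern for relations without a hand-written template.
    template = TEMPLATES.get(relation, "{subject} " + relation + " {object}")
    return template.format(subject=subject, object=obj)

print(triple_to_statement("locatedIn", "Macchu Picchu", "Peru"))
# prints: Macchu Picchu is located in Peru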
We use supporting documents provided by WikiHop. WikiHop collects supporting documents by finding Wikipedia articles that bridges a question entity $e_i$ and an answer entity $e_j$, where the link between articles is given by a hyperlink.
<<</Dataset>>>
<<<Results>>>
Table TABREF17 shows the statistics of responses and example annotations. Table TABREF17 also shows the abstractiveness of annotated NLDs ($a$), namely the number of tokens in an NLD divided by the number of tokens in its corresponding justification sentences. This indicates that annotated NLDs are indeed summarized. See Table TABREF53 in Appendix and Supplementary Material for more results.
<<<Quality>>>
To evaluate the quality of annotation results, we publish another CS task on AMT. We randomly sample 300 True and Likely responses for this evaluation. Given NLDs and a statement, 3 crowdworkers are asked if the NLDs can lead to the statement, on a four-point scale. If the answer is 4 or 3 (“yes” or “likely”), we additionally ask whether each derivation step can be derived from each supporting document; otherwise we ask them for the reasons. For a fair evaluation, we encourage crowdworkers to give lower scores where deserved by stating on the CS interface that we give a bonus if they find a flaw in the reasoning.
The evaluation results shown in Table TABREF24 indicate that the annotated NLDs are of high quality (Reachability), and each NLD is properly derived from supporting documents (Derivability).
On the other hand, we found the quality of 3-step NLDs is relatively lower than the others. Crowdworkers found that 45.3% of the 294 (out of 900) 3-step NLDs have missing steps needed to derive a statement. Let us consider this example: for the annotated NLDs “[1] Kouvola is located in Helsinki. [2] Helsinki is in the region of Uusimaa. [3] Uusimaa borders the regions Southwest Finland, Kymenlaakso and some others.” and the statement “Kouvola is located in Kymenlaakso”, one worker pointed out the missing step “Uusimaa is in Kymenlaakso.”. We speculate that a greater number of reasoning steps makes it difficult for crowdworkers to check the correctness of derivations during the writing task.
<<</Quality>>>
<<<Agreement>>>
For agreement on the number of NLDs, we obtained a Krippendorff's $\alpha $ of 0.223, indicating a fair agreement BIBREF9.
Our manual inspection of the 10 worst disagreements revealed that the majority (7/10) comes from Unsure vs. non-Unsure annotations. It also revealed that crowdworkers who labeled non-Unsure are reliable: 6 out of 7 non-Unsure annotations can be judged as correct. This partially confirms the effectiveness of our incentive structure.
<<</Agreement>>>
<<</Results>>>
<<</Data collection for RC-QED@!START@$^{\rm E}$@!END@>>>
<<<Baseline RC-QED@!START@$^{\rm E}$@!END@ model>>>
To highlight the challenges and nature of RC-QED$^{\rm E}$, we create a simple, transparent, and interpretable baseline model.
Recent studies on knowledge graph completion (KGC) explore compositional inference to combat the sparsity of knowledge bases BIBREF10, BIBREF11, BIBREF12. Given a query triplet $(h, r, t)$ (e.g. (Macchu Picchu, locatedIn, Peru)), a path ranking-based approach for KGC explicitly samples paths between $h$ and $t$ in a knowledge base (e.g. Macchu Picchu—locatedIn—Andes Mountain—countryOf—Peru) and constructs a feature vector of these paths. This feature vector is then used to calculate the compatibility between the query triplet and the sampled paths.
RC-QED$^{\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glassboxness: we can trace the derivation steps of the model easily.
<<<Knowledge graph construction>>>
Given supporting documents $S$, we build a knowledge graph. We first apply a coreference resolver to $S$ and then create a directed graph $G(S)$. Therein, each node represents named entities (NEs) in $S$, and each edge represents textual relations between NEs extracted from $S$. Figure FIGREF27 illustrates an example of $G(S)$ constructed from supporting documents in Figure FIGREF1.
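A rough sketch of how such a graph could be assembled with spaCy and networkx is given below; it assumes coreference has already been resolved and approximates the textual relation between two entities by the token span separating their mentions in the same sentence, so it is an illustration of the construction described above rather than the authors' implementation. The ":-1" suffix marks inverted edges, mirroring the path notation used later.

import itertools
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English pipeline is installed

def build_graph(documents):
    # Nodes are named-entity mentions; edges carry the textual relation between them.
    graph = nx.DiGraph()
    for doc_text in documents:
        doc = nlp(doc_text)
        for sent in doc.sents:
            for e1, e2 in itertools.combinations(sent.ents, 2):
                relation = doc[e1.end:e2.start].text.strip()  # tokens between the two mentions
                if relation:
                    graph.add_edge(e1.text, e2.text, relation=relation)
                    graph.add_edge(e2.text, e1.text, relation=relation + ":-1")  # inverted edge
    return graph

G = build_graph(["Little Queen was released in May 1977 on Portrait Records."])
print(list(G.edges(data=True)))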
<<</Knowledge graph construction>>>
<<<Path ranking-based KGC (PRKGC)>>>
Given a question $Q=(q, r)$ and a candidate entity $c_i$, we estimate the plausibility of $(q, r, c_i)$ as follows:

$P(r|q, c_i) = \sigma ({\rm MLP}(\mathbf {q}, \mathbf {r}, \mathbf {c_i}, \mathbf {\pi }(q, c_i)))$
where $\sigma $ is a sigmoid function, and $\mathbf {q, r, c_i}, \mathbf {\pi }(q, c_i)$ are vector representations of $q, r, c_i$ and a set $\pi (q, c_i)$ of shortest paths between $q$ and $c_i$ on $G(S)$. ${\rm MLP}(\cdot , \cdot )$ denotes a multi-layer perceptron. To encode entities into vectors $\mathbf {q, c_i}$, we use Long-Short Term Memory (LSTM) and take its last hidden state. For example, in Figure FIGREF27, $q =$ Barracuda and $c_i =$ Portrait Records yield $\pi (q, c_i) = \lbrace $Barracuda—is the most popular in their album—Little Queen—was released in May 1977 on—Portrait Records, Barracuda—was released from American band Heart—is the second album released by:-1—Little Queen—was released in May 1977 on—Portrait Records$\rbrace $.
To obtain path representations $\mathbf {\pi }(q, c_i)$, we attentively aggregate individual path representations: $\mathbf {\pi }(q, c_i) = \sum _j \alpha _j \mathbf {\pi _j}(q, c_i)$, where $\alpha _j$ is the attention weight for the $j$-th path. The attention values are calculated as follows: $\alpha _j = \exp ({\rm sc}(q, r, c_i, \pi _j)) / \sum _k \exp ({\rm sc}(q, r, c_i, \pi _k))$, where ${\rm sc}(q, r, c_i, \pi _j) = {\rm MLP}(\mathbf {q}, \mathbf {r}, \mathbf {c_i}, \mathbf {\pi _j})$. To obtain individual path representations $\mathbf {\pi _j}$, we follow toutanova-etal-2015-representing. We use a Bi-LSTM BIBREF13 with mean pooling over timesteps in order to encourage similar paths to have similar path representations.
For the testing phase, we choose a candidate entity $c_i$ with the maximum probability $P(r|q, c_i)$ as an answer entity, and choose a path $\pi _j$ with the maximum attention value $\alpha _j$ as the NLDs. To generate NLDs, we simply traverse the path from $q$ to $c_i$ and subsequently concatenate all entities and textual relations as one string. We output Unanswerable when (i) $\max _{c_i \in C} P(r|q, c_i) < \epsilon _k$ or (ii) $G(S)$ has no path between $q$ and any $c_i \in C$.
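A minimal PyTorch sketch of the attentive path aggregation and plausibility scoring described above is shown below; the hidden sizes, MLP depth, and the use of random vectors in place of the LSTM-encoded entities and Bi-LSTM-encoded paths are assumptions for illustration, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PathScorer(nn.Module):
    def __init__(self, dim=100):
        super().__init__()
        # sc(q, r, c_i, pi_j): scores one path against the query triplet.
        self.score_mlp = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        # Final MLP combining the triplet with the aggregated path representation.
        self.out_mlp = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, q, r, c, paths):
        # q, r, c: vectors of shape (dim,); paths: (num_paths, dim) path representations.
        triple = torch.cat([q, r, c]).unsqueeze(0).expand(paths.size(0), -1)
        scores = self.score_mlp(torch.cat([triple, paths], dim=1)).squeeze(1)
        alpha = F.softmax(scores, dim=0)                      # path attention alpha_j
        agg = (alpha.unsqueeze(1) * paths).sum(dim=0)         # attentive aggregation pi(q, c_i)
        prob = torch.sigmoid(self.out_mlp(torch.cat([q, r, c, agg])))  # P(r|q, c_i)
        return prob, alpha

scorer = PathScorer(dim=100)
q, r, c = torch.randn(100), torch.randn(100), torch.randn(100)
paths = torch.randn(2, 100)   # two candidate paths between q and c_i
prob, alpha = scorer(q, r, c, paths)
print(prob.item(), alpha.tolist())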
<<</Path ranking-based KGC (PRKGC)>>>
<<<Training>>>
Let $\mathcal {K}^+$ be a set of question-answer pairs, where each instance consists of a triplet (a query entity $q_i$, a relation $r_i$, an answer entity $a_i$). Similarly, let $\mathcal {K}^-$ be a set of question-non-answer pairs. We minimize the following binary cross-entropy loss:

$- \sum _{(q_i, r_i, a_i) \in \mathcal {K}^+} \log P(r_i|q_i, a_i) - \sum _{(q_j, r_j, a_j) \in \mathcal {K}^-} \log (1 - P(r_j|q_j, a_j))$
From the NLD point of view, this is unsupervised training. The model is expected to learn the score function ${\rm sc(\cdot )}$ on its own, such that it gives higher scores to paths (i.e. NLD steps) that are useful for discriminating correct answers from wrong answers. Highly scored NLDs might be useful for answer classification, but they are not guaranteed to be interpretable to humans.
<<<Semi-supervising derivations>>>
To address the above issue, we resort to gold-standard NLDs to guide the path scoring function ${\rm sc(\cdot )}$. Let $\mathcal {D}$ be a set of question-answer pairs coupled with gold-standard NLDs, namely a binary vector $\mathbf {p}_i$, where the $j$-th value represents whether the $j$-th path corresponds to a gold-standard NLD (1) or not (0). We apply the following cross-entropy loss to the path attention:

$L_d = - \sum _{i \in \mathcal {D}} \sum _{j} p_{i,j} \log \alpha _{i,j}$
<<</Semi-supervising derivations>>>
<<</Training>>>
<<</Baseline RC-QED@!START@$^{\rm E}$@!END@ model>>>
<<<Experiments>>>
<<<Settings>>>
<<<Hyperparameters>>>
We used 100-dimensional vectors for entities, relations, and textual relation representations. We initialize these representations with 100-dimensional Glove Embeddings BIBREF14 and fine-tuned them during training. We retain only top-100,000 frequent words as a model vocabulary. We used Bi-LSTM with 50 dimensional hidden state as a textual relation encoder, and an LSTM with 100-dimensional hidden state as an entity encoder. We used the Adam optimizer (default parameters) BIBREF15 with a batch size of 32. We set the answerability threshold $\epsilon _k = 0.5$.
<<</Hyperparameters>>>
<<<Baseline>>>
To check the integrity of the PRKGC model, we created a simple baseline model (shortest path model). It outputs a candidate entity with the shortest path length from a query entity on $G(S)$ as an answer. Similarly to the PRKGC model, it traverses the path to generate NLDs. It outputs Unanswerable if (i) a query entity is not reachable to any candidate entities on $G(S)$ or (ii) the shortest path length is more than 3.
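A sketch of this baseline's decision rule is shown below, assuming $G(S)$ is a networkx directed graph whose edges carry a "relation" attribute (as in the earlier graph-construction sketch); the candidate list and the length threshold follow the description above.

import networkx as nx

def shortest_path_baseline(graph, query_entity, candidates, max_len=3):
    # Returns (answer, derivation) or (None, None) for Unanswerable.
    if not graph.has_node(query_entity):
        return None, None
    best = None
    for cand in candidates:
        if cand == query_entity or not graph.has_node(cand):
            continue
        try:
            path = nx.shortest_path(graph, source=query_entity, target=cand)
        except nx.NetworkXNoPath:
            continue
        hops = len(path) - 1
        if hops <= max_len and (best is None or hops < best[0]):
            best = (hops, cand, path)
    if best is None:
        return None, None
    _, answer, path = best
    # Concatenate entities and edge relations along the path as the derivation string.
    parts = [path[0]]
    for u, v in zip(path, path[1:]):
        parts.append(graph[u][v].get("relation", ""))
        parts.append(v)
    return answer, " - ".join(parts)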
<<</Baseline>>>
<<</Settings>>>
<<<Results and discussion>>>
As shown in Table TABREF37, the PRKGC models learned to reason over more than simple shortest paths. Yet, the PRKGC model does not give particularly good results, which indicates the non-triviality of RC-QED$^{\rm E}$. Although the PRKGC model does not receive supervision about human-generated NLDs, paths with the maximum score match human-generated NLDs to some extent.
Supervising path attentions (the PRKGC+NS model) is indeed effective for improving the human interpretability of generated NLDs. It also improves the generalization ability of question answering. We speculate that $L_d$ functions as a regularizer, which helps models learn reasoning that is helpful beyond the training data. This observation is consistent with previous work where an evidence selection task is learned jointly with a main task BIBREF11, BIBREF2, BIBREF5.
As shown in Table TABREF43, as the number of required derivation steps increases, the PRKGC+NS model increasingly struggles to predict answer entities and generate correct NLDs. This indicates that the challenge of RC-QED$^{\rm E}$ lies in how to extract relevant information from supporting documents and synthesize these multiple facts to derive an answer.
To obtain further insights, we manually analyzed generated NLDs. Table TABREF44 (a) illustrates a positive example, where the model identifies that altudoceras belongs to pseudogastrioceratinae, and that pseudogastrioceratinae is a subfamily of paragastrioceratidae. Some supporting sentences are already similar to human-generated NLDs, thus simply extracting textual relations works well for some problems.
On the other hand, a typical derivation error stems from textual relations that are not human-readable. In (b), the model states that bumped has a relationship of “,” with hands up, which is originally extracted from one of the supporting sentences, It contains the UK Top 60 singles “Bumped”, “Hands Up (4 Lovers)” and .... This provides a useful clue for answer prediction, but is not suitable as a derivation. One may address this issue by incorporating, for example, a relation extractor or a paraphrasing mechanism using recent advances of conditional language models BIBREF20.
<<<QA performance.>>>
To check the integrity of our baseline models, we compare them with existing neural models tailored for QA under the pure WikiHop setting (i.e. evaluation with only the accuracy of predicted answers). Note that these existing models do not output derivations, so we cannot make a direct comparison and the comparison serves only as a reference. Because WikiHop has no answerability task, we enforced the PRKGC model to always output answers. As shown in Table TABREF45, the PRKGC models achieve a performance comparable to other sophisticated neural models.
<<</QA performance.>>>
<<</Results and discussion>>>
<<</Experiments>>>
<<<Related work>>>
<<<RC datasets with explanations>>>
There exist few RC datasets annotated with explanations (Table TABREF50). The most similar work to ours is the Science QA dataset BIBREF21, BIBREF22, BIBREF23, which provides a small set of NLDs annotated for analysis purposes. By developing a scalable crowdsourcing framework, our work provides an order of magnitude more NLDs, which can be used as a benchmark more reliably. In addition, it provides the community with new types of challenges not included in HotpotQA.
<<</RC datasets with explanations>>>
<<<Analysis of RC models and datasets>>>
There is a large body of work on analyzing the nature of RC datasets, motivated by the question of to what degree RC models understand natural language BIBREF3, BIBREF4. Several studies suggest that current RC datasets have unintended biases, which enable RC systems to rely on cheap heuristics to answer questions. For instance, Sugawara2018 show that some of these RC datasets contain a large number of “easy” questions that can be solved by cheap heuristics (e.g. by looking at the first few tokens of questions). Responding to their findings, we take a step further and explore a new RC task that requires RC systems to give introspective explanations as well as answers. In addition, recent studies show that current RC models and NLP models are vulnerable to adversarial examples BIBREF29, BIBREF30, BIBREF31. Explicit modeling of NLDs is expected to regularize RC models, which could prevent RC models' strong dependence on unintended biases in training data (e.g. annotation artifacts) BIBREF32, BIBREF8, BIBREF2, BIBREF5, as partially confirmed in Section SECREF46.
<<</Analysis of RC models and datasets>>>
<<<Other NLP corpora annotated with explanations>>>
There are existing NLP tasks that require models to output explanations (Table TABREF50). FEVER BIBREF25 requires a system to judge the “factness” of a claim as well as to identify justification sentences. As discussed earlier, we take a step further from justification explanations to provide new challenges for NLU.
Several datasets are annotated with introspective explanations, ranging from textual entailments BIBREF8 to argumentative texts BIBREF26, BIBREF27, BIBREF33. All these datasets offer the classification task of single sentences or sentence pairs. The uniqueness of our dataset is that it measures a machine's ability to extract relevant information from a set of documents and to build coherent logical reasoning steps.
<<</Other NLP corpora annotated with explanations>>>
<<</Related work>>>
<<<Conclusions>>>
Towards RC models that can perform correct reasoning, we have proposed RC-QED that requires a system to output its introspective explanations, as well as answers. Instantiating RC-QED with entity-based multi-hop QA (RC-QED$^{\rm E}$), we have created a large-scale corpus of NLDs. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations. Our experiments using two simple baseline models have demonstrated that RC-QED$^{\rm E}$ is a non-trivial task, and that it indeed provides a challenging task of extracting and synthesizing relevant facts from supporting documents. We will make the corpus of reasoning annotations and baseline systems publicly available at https://naoya-i.github.io/rc-qed/.
One immediate future work is to expand the annotation to non-entity-based multi-hop QA datasets such as HotpotQA BIBREF2. For modeling, we plan to incorporate a generative mechanism based on recent advances in conditional language modeling.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nTask formulation: RC-QED\nInput, output, and evaluation metrics\nRC-QED@!START@$^{\\rm E}$@!END@\nData collection for RC-QED@!START@$^{\\rm E}$@!END@\nCrowdsourcing interface\nJudgement task (Figure @!START@UID13@!END@).\nDerivation task (Figure @!START@UID14@!END@).\nDataset\nResults\nQuality\nAgreement\nBaseline RC-QED@!START@$^{\\rm E}$@!END@ model\nKnowledge graph construction\nPath ranking-based KGC (PRKGC)\nTraining\nSemi-supervising derivations\nExperiments\nSettings\nHyperparameters\nBaseline\nResults and discussion\nQA performance.\nRelated work\nRC datasets with explanations\nAnalysis of RC models and datasets\nOther NLP corpora annotated with explanations\nConclusions"
],
"type": "outline"
}
|
1912.05066
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Event Outcome Prediction using Sentiment Analysis and Crowd Wisdom in Microblog Feeds
<<<Abstract>>>
Sentiment Analysis of microblog feeds has attracted considerable interest in recent times. Most of the current work focuses on tweet sentiment classification. But not much work has been done to explore how reliable the opinions of the mass (crowd wisdom) in social network microblogs such as Twitter are in predicting outcomes of certain events such as election debates. In this work, we investigate whether crowd wisdom is useful in predicting such outcomes and whether their opinions are influenced by the experts in the field. We work in the domain of multi-label classification to perform sentiment classification of tweets and obtain the opinion of the crowd. This learnt sentiment is then used to predict outcomes of events such as: US Presidential Debate winners, Grammy Award winners, Super Bowl Winners. We find that in most of the cases, the wisdom of the crowd does indeed match with that of the experts, and in cases where they don't (particularly in the case of debates), we see that the crowd's opinion is actually influenced by that of the experts.
<<</Abstract>>>
<<<Introduction>>>
Over the past few years, microblogs have become one of the most popular online social networks. Microblogging websites have evolved to become a source of varied kinds of information. This is due to the nature of microblogs: people post real-time messages about their opinions and express sentiment on a variety of topics, discuss current issues, complain, etc. Twitter is one such popular microblogging service where users create status messages (called “tweets”). With over 400 million tweets per day on Twitter, microblog users generate a large amount of data, which covers very rich topics ranging from politics and sports to celebrity gossip. Because the user-generated content on microblogs covers rich topics and expresses sentiment/opinions of the mass, mining and analyzing this information can prove to be very beneficial both to the industrial and the academic community. Tweet classification has attracted considerable attention because it has become very important to analyze people's sentiments and opinions over social networks.
Most of the current work on analysis of tweets is focused on sentiment analysis BIBREF0, BIBREF1, BIBREF2. Not much has been done on predicting outcomes of events based on the tweet sentiments, for example, predicting winners of presidential debates based on the tweets by analyzing the users' sentiments. This is possible intuitively because the sentiment of the users in their tweets towards the candidates is proportional to the performance of the candidates in the debate.
In this paper, we analyze three such events: 1) US Presidential Debates 2015-16, 2) Grammy Awards 2013, and 3) Super Bowl 2013. The main focus is on the analysis of the presidential debates. For the Grammys and the Super Bowl, we just perform sentiment analysis and try to predict the outcomes in the process. For the debates, in addition to the analysis done for the Grammys and Super Bowl, we also perform a trend analysis. Our analysis of the tweets for the debates is 3-fold as shown below.
Sentiment: Perform a sentiment analysis on the debates. This involves: building a machine learning model which learns the sentiment-candidate pair (candidate is the one to whom the tweet is being directed) from the training data and then using this model to predict the sentiment-candidate pairs of new tweets.
Predicting Outcome: Here, after predicting the sentiment-candidate pairs on the new data, we predict the winner of the debates based on the sentiments of the users.
Trends: Here, we analyze certain trends of the debates like the change in sentiments of the users towards the candidates over time (hours, days, months) and how the opinion of experts such as Washington Post affect the sentiments of the users.
For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take context of the text into account, which is important for sentiment polarity classification, and topic features take into account the topic of the tweet (who/what is it about).
The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used by said experts to score the candidates in the debates were in fact reflected in people's sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of the crowd.
We do find out that the public sentiments are not always coincident with the views of the experts. In this case, it is interesting to check whether the views of the experts can affect the public, for example, by spreading through the social media microblogs such as Twitter. Hence, we also conduct experiments to compare the public sentiment before and after the experts' views become public and thus notice the impact of the experts' views on the public sentiment. In our analysis of the debates, we observe that in certain debates, such as the 5th Republican Debate, held on December 15, 2015, the opinions of the users vary from the experts. But we see the effect of the experts on the sentiment of the users by looking at their opinions of the same candidates the next day.
Our contributions are mainly: we want to see how predictive the sentiment/opinion of the users are in social media microblogs and how it compares to that of the experts. In essence, we find that the crowd wisdom in the microblog domain matches that of the experts in most cases. There are cases, however, where they don't match but we observe that the crowd's sentiments are actually affected by the experts. This can be seen in our analysis of the presidential debates.
The rest of the paper is organized as follows: in section SECREF2, we review some of the literature. In section SECREF3, we discuss the collection and preprocessing of the data. Section SECREF4 details the approach taken, along with the features and the machine learning methods used. Section SECREF7 discusses the results of the experiments conducted and lastly section SECREF8 ends with a conclusion on the results including certain limitations and scopes for improvement to work on in the future.
<<</Introduction>>>
<<<Related Work>>>
Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity. Specifically on the microblog front, some of the early results on sentiment analysis are by BIBREF0, BIBREF1, BIBREF2, BIBREF5, BIBREF6. Go et al. BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels. Kouloumpis et al. BIBREF7 exploited hashtags in tweets to build training data. Chenhao Tan et al. BIBREF8 determined user-level sentiments on particular topics with the help of the social network graph.
There has been some work in event detection and extraction in microblogs as well. In BIBREF9, the authors describe a way to extract major life events of a user based on tweets that either congratulate/offer condolences. BIBREF10 build a key-word graph from the data and then detect communities in this graph (cluster) to find events. This works because words that describe similar events will form clusters. In BIBREF11, the authors use distant supervision to extract events. There has also been some work on event retrieval in microblogs BIBREF12. In BIBREF13, the authors detect time points in the twitter stream when an important event happens and then classify such events based on the sentiments they evoke using only non-textual features to do so. In BIBREF14, the authors study how much of the opinion extracted from Online Social Networks (OSN) user data is reflective of the opinion of the larger population. Researchers have also mined Twitter dataset to analyze public reaction to various events: from election debate performance BIBREF15, where the authors demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events, to movie box-office predictions on the release day BIBREF16. Mishne and Glance BIBREF17 correlate sentiments in blog posts with movie box-office scores. The correlations they observed for positive sentiments are fairly low and not sufficient to use for predictive purposes. Recently, several approaches involving machine learning and deep learning have also been used in the visual and language domains BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24.
<<</Related Work>>>
<<<Data Set and Preprocessing>>>
<<<Data Collection>>>
Twitter is a social networking and microblogging service that allows users to post real-time messages, called tweets. Tweets are very short messages, a maximum of 140 characters in length. Due to such a restriction in length, people tend to use a lot of acronyms, shorten words etc. In essence, the tweets are usually very noisy. There are several aspects to tweets such as: 1) Target: Users use the symbol “@" in their tweets to refer to other users on the microblog. 2) Hashtag: Hashtags are used by users to mark topics. This is done to increase the visibility of the tweets.
We conduct experiments on 3 different datasets, as mentioned earlier: 1) US Presidential Debates, 2) Grammy Awards 2013, 3) Super Bowl 2013. To construct our presidential debates dataset, we used the Twitter Search API to collect the tweets. Since there was no publicly available dataset for this, we had to collect the data manually. The data was collected on 10 different presidential debates: 7 Republican and 3 Democratic, from October 2015 to March 2016. Different hashtags like “#GOP, #GOPDebate” were used to filter out tweets specific to the debate. This is given in Table TABREF2. We extracted only English tweets for our dataset. A total of 104961 tweets were collected across all the debates. But there were some limitations with the API. Firstly, the server imposes a rate limit and discards tweets when the limit is reached. The second problem is that the API returns many duplicates. Thus, after removing the duplicates and irrelevant tweets, we were left with a total of 17304 tweets. This includes the tweets only on the day of the debate. We also collected tweets on the days following the debate.
As for the other two datasets, we collected them from available online repositories. There were a total of 2580062 tweets for the Grammy Awards 2013, and a total of 2428391 tweets for the Super Bowl 2013. The statistics are given in Tables TABREF3 and TABREF3. The tweets for the Grammys were collected both before and during the ceremony. However, we only use the tweets from before the ceremony (after the nominations were announced) to predict the winners. As for the Super Bowl, the tweets were collected during the game. But we can predict interesting things like the Most Valuable Player etc. from the tweets. The tweets for both of these datasets were already annotated and thus did not require any human intervention. However, the tweets for the debates had to be annotated.
Since we are using a supervised approach in this paper, we have all the tweets (for debates) in the training set human-annotated. The tweets were already annotated for the Grammys and Super Bowl. Some statistics about our datasets are presented in Tables TABREF3, TABREF3 and TABREF3. The annotations for the debate dataset comprised of 2 labels for each tweet: 1) Candidate: This is the candidate of the debate to whom the tweet refers to, 2) Sentiment: This represents the sentiment of the tweet towards that candidate. This is either positive or negative.
The task then becomes a case of multi-label classification. The candidate labels are not so trivial to obtain, because there are tweets that do not directly contain any candidate's name. For example, the tweets “a business man for president??” and “a doctor might sure bring about a change in America!” are about Donald Trump and Ben Carson respectively. Thus, it is meaningful to have a multi-label task.
The annotations for the other two datasets are similar, in that one of the labels was the sentiment and the other was category-dependent in the outcome-prediction task, as discussed in the sections below. For example, if we are trying to predict the "Album of the Year" winners for the Grammy dataset, the second label would be the nominees for that category (album of the year).
<<</Data Collection>>>
<<<Preprocessing>>>
As noted earlier, tweets are generally noisy and thus require some preprocessing before being used. Several filters were applied to the tweets, such as: (1) Usernames: Since users often include usernames in their tweets to direct their message, we simplify them by replacing the usernames with the token “USER”. For example, @michael will be replaced by USER. (2) URLs: In most of the tweets, users include links that add on to their text message. We convert/replace the link address with the token “URL”. (3) Repeated Letters: Oftentimes, users use repeated letters in a word to emphasize their notion. For example, the word “lol” (which stands for “laugh out loud”) is sometimes written as “looooool” to emphasize the degree of funniness. We replace such repeated occurrences of letters (more than 2) with just 3 occurrences. We replace them with 3 occurrences and not 2 so that we can distinguish the exaggerated usage from the regular one. (4) Multiple Sentiments: Tweets which contain multiple sentiments are removed, such as "I hate Donald Trump, but I will vote for him". This is done so that there is no ambiguity. (5) Retweets: In Twitter, tweets of a person are often copied and posted by another user. This is known as retweeting and such tweets are commonly abbreviated with “RT”. These are removed and only the original tweets are processed. (6) Repeated Tweets: The Twitter API sometimes returns a tweet multiple times. We remove such duplicates to avoid putting extra weight on any particular tweet.
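A minimal sketch of filters (1), (2), (3), and (6) using regular expressions is given below; it illustrates the preprocessing described above rather than reproducing the exact pipeline.

import re

def preprocess(tweet):
    tweet = re.sub(r"@\w+", "USER", tweet)                  # (1) usernames -> USER
    tweet = re.sub(r"https?://\S+|www\.\S+", "URL", tweet)  # (2) links -> URL
    tweet = re.sub(r"(.)\1{2,}", r"\1\1\1", tweet)          # (3) 3+ repeated letters -> exactly 3
    return tweet.strip()

def deduplicate(tweets):
    # (6) Drop exact duplicate tweets while preserving their order.
    seen, unique = set(), []
    for t in tweets:
        if t not in seen:
            seen.add(t)
            unique.append(t)
    return unique

print(preprocess("@michael looooool check this out http://example.com"))
# prints: USER loool check this out URL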
<<</Preprocessing>>>
<<</Data Set and Preprocessing>>>
<<<Methodology>>>
<<<Procedure>>>
Our analysis of the debates is 3-fold including sentiment analysis, outcome prediction, and trend analysis.
Sentiment Analysis: To perform a sentiment analysis on the debates, we first extract all the features described below from all the tweets in the training data. We then build the different machine learning models described below on these set of features. After that, we evaluate the output produced by the models on unseen test data. The models essentially predict candidate-sentiment pairs for each tweet.
Outcome Prediction: Predict the outcome of the debates. After obtaining the sentiments on the test data for each tweet, we can compute the net normalized sentiment for each presidential candidate in the debate. This is done by looking at the number of positive and negative sentiments for each candidate. We then normalize the sentiment scores of each candidate to be on the same scale (0-1). After that, we rank the candidates based on the sentiment scores and predict the top $k$ as the winners (a sketch of this aggregation is given at the end of this subsection).
Trend Analysis: We also analyze some certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how it varies over time, across debates. Secondly, we examine the effect of Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in Washington Post. This way, we can see if Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates.
As for the Grammys and the Super Bowl, we only perform the sentiment analysis and predict the outcomes.
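The outcome-prediction step reduces to a simple aggregation once candidate-sentiment pairs have been predicted for the test tweets. The sketch below uses made-up counts, and min-max normalization to the 0-1 range is one plausible choice since the exact normalization scheme is not spelled out above.

from collections import Counter

def predict_winners(pairs, k=3):
    # pairs: iterable of (candidate, sentiment) with sentiment in {"positive", "negative"}.
    pos, neg = Counter(), Counter()
    for candidate, sentiment in pairs:
        (pos if sentiment == "positive" else neg)[candidate] += 1
    net = {c: pos[c] - neg[c] for c in set(pos) | set(neg)}   # net sentiment per candidate
    lo, hi = min(net.values()), max(net.values())
    scores = {c: (v - lo) / (hi - lo) if hi > lo else 0.5 for c, v in net.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)     # rank by normalized score
    return ranked[:k], scores

pairs = [("Trump", "negative"), ("Cruz", "positive"), ("Rubio", "positive"),
         ("Cruz", "positive"), ("Trump", "positive"), ("Bush", "positive")]
print(predict_winners(pairs, k=2))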
<<</Procedure>>>
<<<Machine Learning Models>>>
We compare 4 different models for performing our task of sentiment classification. We then pick the best performing model for the task of outcome prediction. Here, we have two categories of algorithms: single-label and multi-label (We already discussed above why it is meaningful to have a multi-label task earlier), because one can represent $<$candidate, sentiment$>$ as a single class label or have candidate and sentiment as two separate labels. They are listed below:
<<<Single-label Classification>>>
Naive Bayes: We use a multinomial Naive Bayes model. A tweet $t$ is assigned a class $c^{*}$ such that

$c^{*} = \arg \max _{c} P(c) \prod _{i=1}^{m} P(f_i|c)$
where there are $m$ features and $f_i$ represents the $i^{th}$ feature.
Support Vector Machines: Support Vector Machines (SVM) constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which can then be used for classification. In our case, we use SVM with Sequential Minimal Optimization (SMO) BIBREF25, which is an algorithm for solving the quadratic programming (QP) problem that arises during the training of SVMs.
Elman Recurrent Neural Network: Recurrent Neural Networks (RNNs) are gaining popularity and are being applied to a wide variety of problems. They are a class of artificial neural networks, where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. The Elman RNN was proposed by Jeff Elman in the year 1990 BIBREF26. We use this in our task.
<<</Single-label Classification>>>
<<<Multi-label Classification>>>
RAkEL (RAndom k labELsets): RAkEL BIBREF27 is a multi-label classification algorithm based on the label powerset (LP) transformation, which treats every distinct combination of labels observed in the data as a single class. RAkEL trains multiple LP classifiers, each on a random subset of the actual labels, and combines their predictions for classification.
<<</Multi-label Classification>>>
<<</Machine Learning Models>>>
<<<Feature Space>>>
In order to classify the tweets, a set of features is extracted from each of the tweets, such as n-gram, part-of-speech etc. The details of these features are given below:
n-gram: This represents the frequency counts of n-grams, specifically that of unigrams and bigrams.
punctuation: The number of occurrences of punctuation symbols such as commas, exclamation marks etc.
POS (part-of-speech): The frequency of each POS tagger is used as the feature.
prior polarity scoring: Here, we obtain the prior polarity of the words BIBREF6 using the Dictionary of Affect in Language (DAL) BIBREF28. This dictionary (DAL) of about 8000 English words assigns a pleasantness score to each word on a scale of 1-3. After normalizing, we label words with a score higher than $0.8$ as positive and words with a score less than $0.5$ as negative. If a word is not present in the dictionary, we look up its synonyms in WordNet: if such a synonym is in the dictionary, we assign the original word its synonym's score (a sketch of this lookup is given at the end of this subsection).
Twitter Specific features:
Number of hashtags ($\#$ symbol)
Number of mentioned users (“@” symbol)
Number of hyperlinks
Document embedding features: Here, we use the approach proposed by Mikolov et al. BIBREF3 to embed an entire tweet into a vector of features.
Topic features: Here, LDA (Latent Dirichlet Allocation) BIBREF4 is used to extract topic-specific features for a tweet (document). This is basically the topic-document probability that is outputted by the model.
In the following experiments, we use 1-$gram$, 2-$gram$ and $(1+2)$-$gram$ to denote unigram, bigram and a combination of unigram and bigram features respectively. We also combine punctuation and the other features as miscellaneous features and use $MISC$ to denote this. We represent the document-embedding features by $DOC$ and topic-specific features by $TOPIC$.
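As an illustration of the prior-polarity feature above, the sketch below looks a word up in a pleasantness dictionary and falls back to WordNet synonyms when the word is missing; the dal_scores dictionary here is a tiny stand-in for the normalized DAL scores, which are an assumed input.

from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

# Tiny stand-in for the normalized DAL pleasantness scores (values in [0, 1]).
dal_scores = {"happy": 0.93, "terrible": 0.21, "glad": 0.88}

def prior_polarity(word, scores=dal_scores):
    # Returns "positive", "negative", or None, following the thresholds described above.
    score = scores.get(word)
    if score is None:
        # Fall back to WordNet synonyms of the word when it is missing from the dictionary.
        for synset in wn.synsets(word):
            for lemma in synset.lemma_names():
                if lemma.lower() in scores:
                    score = scores[lemma.lower()]
                    break
            if score is not None:
                break
    if score is None:
        return None
    if score > 0.8:
        return "positive"
    if score < 0.5:
        return "negative"
    return None

print(prior_polarity("happy"), prior_polarity("awful"))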
<<</Feature Space>>>
<<</Methodology>>>
<<<Data Analysis>>>
In this section, we analyze the presidential debates data and show some trends.
First, we look at the trend of the tweet frequency. Figure FIGREF21 shows the trends of the tweet frequency and the number of TV viewers as the debates progress over time. We observe from Figures FIGREF21 and FIGREF21 that for the first 5 debates considered, the trend of the number of TV viewers matches the trend of the number of tweets. However, we can see that towards the final debates, the frequency of the tweets decreases consistently. This shows an interesting fact: although people still watch the debates, the number of people who tweet about them is greatly reduced. Since the tweeting community consists mainly of younger users, this suggests that most of those who actively tweet did not watch the later debates; if they had, the trends should ideally match.
Next we look at how the tweeting activity is on days of the debate: specifically on the day of the debate, the next day and two days later. Figure FIGREF22 shows the trend of the tweet frequency around the day of the 5th republican debate, i.e December 15, 2015. As can be seen, the average number of people tweet more on the day of the debate than a day or two after it. This makes sense intuitively because the debate would be fresh in their heads.
Then, we look at how people tweet in the hours of the debate: specifically during the debate, one hour after and then two hours after. Figure FIGREF23 shows the trend of the tweet frequency around the hour of the 5th republican debate, i.e December 15, 2015. We notice that people don't tweet much during the debate but the activity drastically increases after two hours. This might be because people were busy watching the debate and then taking some time to process things, so that they can give their opinion.
We have seen the frequency of tweets over time in the previous trends. Now, we will look at how the sentiments of the candidates change over time.
First, Figure FIGREF24 shows how the sentiments of the candidates changed across the debates. We find that many of the candidates have had ups and downs over the course of the debates. These trends are interesting in that they give some useful information about what went down in a debate that caused the sentiments to change (sometimes drastically). For example, if we look at the graph for Donald Trump, we see that his sentiment was at its lowest point during the debate held on December 15. Looking into the debate, we can easily see why this was the case. At a certain point in the debate, Trump was asked about his ideas for the nuclear triad. It is very important that a presidential candidate knows about this, but Trump had no idea what the nuclear triad was and, in a transparent attempt to cover his tracks, resorted to a “we need to be strong" speech. It may be due to this embarrassment that his sentiment went down during this debate.
Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the expert verdicts on the debate affect the sentiment towards the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidates during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates do indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace is generally prejudiced towards the candidates, which is reflected in their sentiments on the day of the debate. The results of the Washington Post are released in the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners in that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment towards Bush went up one day after the debate (essentially, one day after the results given by the experts were out).
There is some influence, for better or worse, of these experts on the minds of the users in the early debates, but towards the final debates the sentiments of the users are mostly unwavering, as can be seen in Figure FIGREF25. Figure FIGREF25 is on the last Republican debate, and suggests that the opinions of the users do not change much with time. Essentially the users have seen enough debates to make up their own minds and their sentiments are not easily wavered.
<<</Data Analysis>>>
<<<Evaluation Metrics>>>
In this section, we define the different evaluation metrics that we use for different tasks. We have two tasks at hand: 1) Sentiment Analysis, 2) Outcome Prediction. We use different metrics for these two tasks.
<<<Sentiment Analysis>>>
In the study of sentiment analysis, we use “Hamming Loss” to evaluate the performance of the different methods. Hamming Loss, based on Hamming distance, takes into account the prediction error and the missing error, normalized over the total number of classes and total number of examples BIBREF29. The Hamming Loss is given below:
where $|D|$ is the number of examples in the dataset and $|L|$ is the number of labels. $S_i$ and $Y_i$ denote the sets of true and predicted labels for instance $i$ respectively. $\oplus $ stands for the XOR operation BIBREF30. Intuitively, the smaller the Hamming Loss, the better the performance; 0 would be the ideal case.
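Written out with these symbols (shown here in its standard multi-label form), the Hamming Loss is:

$$\mathrm{Hamming\ Loss} = \frac{1}{|D|\,|L|} \sum _{i=1}^{|D|} \left| S_{i} \oplus Y_{i} \right|$$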
<<</Sentiment Analysis>>>
<<<Outcome Prediction>>>
For the case of outcome prediction, we will have a predicted set and an actual set of results. Thus, we can use common information retrieval metrics to evaluate the prediction performance. Those metrics are listed below:
Mean F-measure: The F-measure combines the precision and recall of the results; in essence, it captures both how many of the relevant results are returned and how many of the returned results are relevant.
where $|D|$ is the number of queries (debates/categories for grammy winners etc.), $P_i$ and $R_i$ are the precision and recall for the $i^{th}$ query.
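Written out with these symbols, the mean F-measure takes its standard form:

$$\text{Mean F-measure} = \frac{1}{|D|} \sum _{i=1}^{|D|} \frac{2\,P_{i}\,R_{i}}{P_{i}+R_{i}}$$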
Mean Average Precision: As a standard metric used in information retrieval, Mean Average Precision for a set of queries is mean of the average precision scores for each query:
where $|D|$ is the number of queries (e.g., debates), $P_i(k)$ is the precision at $k$ ($P@k$) for the $i^{th}$ query, $rel_i(k)$ is an indicator function, equaling 1 if the document at position $k$ for the $i^{th}$ query is relevant, else 0, and $|RD_i|$ is the number of relevant documents for the $i^{th}$ query.
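With these symbols, the standard form of this metric is:

$$\mathrm{MAP} = \frac{1}{|D|} \sum _{i=1}^{|D|} \frac{1}{|RD_{i}|} \sum _{k} P_{i}(k)\, rel_{i}(k)$$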
<<</Outcome Prediction>>>
<<</Evaluation Metrics>>>
<<<Results>>>
<<<Results for Outcome Prediction>>>
In this section, we show the results for the outcome-prediction of the events. RaKel, as the best performing method, is trained to predict the sentiment-labels for the unlabeled data. The sentiment labels are then used to determine the outcome of the events. In the Tables (TABREF35, TABREF36, TABREF37) of outputs given, we only show as many predictions as there are winners.
<<<Presidential Debates>>>
The results obtained for the outcome prediction task for the US presidential debates are shown in Table TABREF35. Table TABREF35 shows the winners as given in the Washington Post (3rd column) and the winners that are predicted by our system (2nd column). By comparing the sets of results obtained from both sources, we find that the set of candidates predicted matches to a large extent the winners given by the Washington Post. The result suggests that the opinions of the social media community match those of the journalists in most cases.
<<</Presidential Debates>>>
<<<Grammy Awards>>>
A Grammy Award is given for outstanding achievement in the music industry. There are two types of awards: “General Field” awards, which are not restricted by genre, and genre-specific awards. Since there can be up to 80 categories of awards, we only focus on the main 4: 1) Album of the Year, 2) Record of the Year, 3) Song of the Year, and 4) Best New Artist. These are the main categories in the awards ceremony and the most looked forward to. That is also why we choose to predict the outcomes of these categories based on the tweets. We use the tweets before the ceremony (but after the nominations) to predict the outcomes.
Basically, we have a list of nominations for each category. We filter the tweets based on these nominations and then predict the winner as in the case of the debates. The outcomes are listed in Table TABREF36. We see that, largely, the opinions of the users on the social network agree with the deciding committee of the awards. The winners agree for all the categories except “Song of the Year”.
<<</Grammy Awards>>>
<<<Super Bowl>>>
The Super Bowl is the annual championship game of the National Football League. We have collected the data for the year 2013. Here, the match was between the Baltimore Ravens and the San Francisco 49ers. The tweets that we have collected are during the game. From these tweets, one could trivially determine the winner. But an interesting outcome would be to predict the Most Valuable Player (MVP) during the game. To determine this, all the tweets were looked at and we determined the candidate with the highest positive sentiment by the end of the game. The result in Table TABREF37 suggests that we are able to determine the outcomes accurately.
Table TABREF43 displays some evaluation metrics for this task. These were computed based on the predicted outcomes and the actual outcomes for each of the different datasets. Since the number of participants varies from debate-to-debate or category-to-category for Grammy etc., we cannot return a fixed number of winners for everything. So, the size of our returned ranked-list is set to half of the number of participants (except for the MVP for Super Bowl; there are so many players and returning half of them when only one of them is relevant is meaningless. So, we just return the top 10 players). As we can see from the metrics, the predicted outcomes match quite well with the actual ones (or the ones given by the experts).
<<</Super Bowl>>>
<<</Results for Outcome Prediction>>>
<<</Results>>>
<<<Conclusions>>>
This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to those of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined whether the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases the predictions were right, indicating that crowd wisdom is indeed worth looking at and that mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification were a combination of the document-embedding and topic features.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nData Set and Preprocessing\nData Collection\nPreprocessing\nMethodology\nProcedure\nMachine Learning Models\nSingle-label Classification\nMulti-label Classification\nFeature Space\nData Analysis\nEvaluation Metrics\nSentiment Analysis\nOutcome Prediction\nResults\nResults for Outcome Prediction\nPresidential Debates\nGrammy Awards\nSuper Bowl\nConclusions"
],
"type": "outline"
}
|
1910.03891
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Learning High-order Structural and Attribute information by Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding
<<<Abstract>>>
The goal of representation learning of knowledge graphs is to encode both entities and relations into a low-dimensional embedding space. Many recent works have demonstrated the benefits of knowledge graph embedding on knowledge graph completion tasks, such as relation extraction. However, we observe that: 1) existing methods only take direct relations between entities into consideration and fail to express high-order structural relationships between entities; 2) these methods only leverage the relation triples of KGs while ignoring a large number of attribute triples that encode rich semantic information. To overcome these limitations, this paper proposes a novel knowledge graph embedding method, named KANE, which is inspired by the recent developments of graph convolutional networks (GCN). KANE can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework. Empirical results on three datasets show that KANE significantly outperforms seven state-of-the-art methods. Further analysis verifies the efficiency of our method and the benefits brought by the attention mechanism.
<<</Abstract>>>
<<<Introduction>>>
In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2 have been built to represent human complex knowledge about the real-world in the machine-readable format. The facts in KGs are usually encoded in the form of triples $(\textit {head entity}, relation, \textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g.,$(\textit {Donald Trump}, Born In, \textit {New York City})$. Figure FIGREF2 shows the subgraph of knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as the $\textit {Born}$ and $\textit {Abstract}$ in Figure FIGREF2, and others indicates the relations between entities (the head entity and tail entity are real world entity). Hence, the relationship in KG can be divided into relations and attributes, and correspondingly two types of triples, namely relation triples and attribute triples BIBREF3. A relation triples in KGs represents relationship between entities, e.g.,$(\textit {Donald Trump},Father of, \textit {Ivanka Trump})$, while attribute triples denote a literal attribute value of an entity, e.g.,$(\textit {Donald Trump},Born, \textit {"June 14, 1946"})$.
Knowledge graphs have become an important basis for many artificial intelligence applications, such as recommendation systems BIBREF4, question answering BIBREF5 and information retrieval BIBREF6, and are attracting growing interest in both the academic and industrial communities. A common approach to applying KGs in these artificial intelligence applications is through embedding, which provides a simple method to encode both entities and relations into a continuous low-dimensional embedding space. Hence, learning distributional representations of knowledge graphs has attracted much research attention in recent years. TransE BIBREF7 is a seminal work in learning low-dimensional vector representations for both entities and relations. The basic idea behind TransE is that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, which indicates $\textbf {h}+\textbf {r}\approx \textbf {t}$. This model provides a flexible way to improve the ability to complete KGs, such as predicting missing items in a knowledge graph. Since then, several methods like TransH BIBREF8 and TransR BIBREF9, which represent the relational translation in other effective forms, have been proposed. Recent attempts have focused on either incorporating extra information beyond KG triples BIBREF10, BIBREF11, BIBREF12, BIBREF13, or designing more complicated strategies BIBREF14, BIBREF15, BIBREF16.
While these methods have achieved promising results in KG completion and link predication, existing knowledge graph embedding methods still have room for improvement. First, TransE and its most extensions only take direct relations between entities into consideration. We argue that the high-order structural relationship between entities also contain rich semantic relationships and incorporating these information can improve model performance. For example the fact $\textit {Donald Trump}\stackrel{Father of}{\longrightarrow }\textit {Ivanka Trump}\stackrel{Spouse}{\longrightarrow }\textit {Jared Kushner} $ indicates the relationship between entity Donald Trump and entity Jared Kushner. Several path-based methods have attempted to take multiple-step relation paths into consideration for learning high-order structural information of KGs BIBREF17, BIBREF18. But note that huge number of paths posed a critical complexity challenge on these methods. In order to enable efficient path modeling, these methods have to make approximations by sampling or applying path selection algorithm. We argue that making approximations has a large impact on the final performance.
Second, to the best of our knowledge, most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples. Therefore, these methods easily suffer from sparseness and incompleteness of knowledge graph. Even worse, structure information usually cannot distinguish the different meanings of relations and entities in different triples. We believe that these rich information encoded in attribute triples can help explore rich semantic information and further improve the performance of knowledge graph. For example, we can learn date of birth and abstraction from values of Born and Abstract about Donald Trump in Figure FIGREF2. There are a huge number of attribute triples in real KGs, for example the statistical results in BIBREF3 shows attribute triples are three times as many as relationship triples in English DBpedia (2016-04). Recent a few attempts try to incorporate attribute triples BIBREF11, BIBREF12. However, these are two limitations existing in these methods. One is that only a part of attribute triples are used in the existing methods, such as only entity description is used in BIBREF12. The other is some attempts try to jointly model the attribute triples and relation triples in one unified optimization problem. The loss of two kinds triples has to be carefully balanced during optimization. For example, BIBREF3 use hyper-parameters to weight the loss of two kinds triples in their models.
Considering limitations of existing knowledge graph embedding methods, we believe it is of critical importance to develop a model that can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner. Towards this end, inspired by the recent developments of graph convolutional networks (GCN) BIBREF19, which have the potential of achieving the goal but have not been explored much for knowledge graph embedding, we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding (KANE). The key ideal of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representations of given entity. Specifically, two carefully designs are equipped in KANE to correspondingly address the above two challenges: 1) recursive embedding propagation based on relation triples, which updates a entity embedding. Through performing such recursively embedding propagation, the high-order structural information of kGs can be successfully captured in a linear time complexity; and 2) multi-head attention-based aggregation. The weight of each attribute triples can be learned through applying the neural attention mechanism BIBREF20.
In experiments, we evaluate our model on two KG tasks, including knowledge graph completion and entity classification. Experimental results on three datasets show that our method significantly outperforms state-of-the-art methods.
The main contributions of this study are as follows:
1) We highlight the importance of explicitly modeling the high-order structural and attribution information of KGs to provide better knowledge graph embedding.
2) We propose a new method, KANE, which can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework.
3) We conduct experiments on three datasets, demonstrating the effectiveness of KANE and its interpretability in understanding the importance of high-order relations.
<<</Introduction>>>
<<<Related Work>>>
In recent years, there have been many efforts in knowledge graph embedding aiming to encode entities and relations into a continuous low-dimensional embedding space. Knowledge graph embedding provides a very simple and effective method to apply KGs in various artificial intelligence applications, and has therefore attracted much research attention in recent years. The general methodology is to define a score function for the triples and finally learn the representations of entities and relations by minimizing the loss function $f_r(h,t)$, which implies some type of transformation on $\textbf {h}$ and $\textbf {t}$. TransE BIBREF7 is a seminal work in knowledge graph embedding, which assumes the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ when $(h, r, t)$ holds, as mentioned in the section “Introduction". Hence, TransE defines the following loss function:
TransE, regarding the relation as a translation between the head entity and the tail entity, is inspired by word2vec BIBREF21, where relationships between words often correspond to translations in latent feature space. This model achieves a good trade-off between computational efficiency and accuracy in KGs with thousands of relations, but it has flaws in dealing with one-to-many, many-to-one and many-to-many relations.
In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representations under different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., entity space and relation spaces, and performs translation from the entity space to the relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrices. Recent attempts can be divided into two categories: (i) those which try to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relation paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which try to design more complicated strategies, e.g., deep neural network models BIBREF24.
Beyond TransE and its extensions, some efforts measure plausibility by matching latent semantics of entities and relations. The basic idea behind these models is that the plausible triples of a KG are assigned low energies. For example, the Distant Model BIBREF25 defines two different projections for the head and tail entities in a specific relation, i.e., $\textbf {M}_{r,1}$ and $\textbf {M}_{r,2}$, representing that the vectors of the head and tail entities can be transformed by these two projections. The loss function is $f_r(h,t)=||\textbf {M}_{r,1}\textbf {h}-\textbf {M}_{r,2}\textbf {t}||_{1}$.
Our KANE is conceptually advantageous to existing methods in that: 1) it directly factors high-order relations into the predictive model in linear time which avoids the labor intensive process of materializing paths, thus is more efficient and convenient to use; 2) it directly encodes all attribute triples in learning representation of entities which can capture rich semantic information and further improve the performance of knowledge graph embedding, and 3) KANE can directly factors high-order relations and attribute information into the predictive model in an efficient, explicit and unified manner, thus all related parameters are tailored for optimizing the embedding objective.
<<</Related Work>>>
<<<Problem Formulation>>>
In this study, we consider two kinds of triples existing in KGs: relation triples and attribute triples. Relation triples denote relations between entities, while attribute triples describe attributes of entities. Since both relation and attribute triples denote important information about an entity, we take both of them into consideration in the task of learning representations of entities. We let $I $ denote the set of IRIs (Internationalized Resource Identifiers), $B $ the set of blank nodes, and $L $ the set of literals (denoted by quoted strings). The relation triples and attribute triples can be formalized as follows:
Definition 1. Relation and Attribute Triples: A set of Relation triples $ T_{R} $ can be represented by $ T_{R} \subset E \times R \times E $, where $E \subset I \cup B $ is set of entities, $R \subset I$ is set of relations between entities. Similarly, $ T_{A} \subset E \times R \times A $ is the set of attribute triples, where $ A \subset I \cup B \cup L $ is the set of attribute values.
Definition 2. Knowledge Graph: A KG consists of a combination of relation triples in the form of $ (h, r, t)\in T_{R} $, and attribute triples in form of $ (h, r, a)\in T_{A} $. Formally, we represent a KG as $G=(E,R,A,T_{R},T_{A})$, where $E=\lbrace h,t|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is set of entities, $R =\lbrace r|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is set of relations, $A=\lbrace a|(h,r,a)\in T_{A}\rbrace $, respectively.
The purpose of this study is to use an embedding-based model which can capture both high-order structural and attribute information of KGs and which assigns a continuous representation to each element of a triple in the form $ (\textbf {h}, \textbf {r}, \textbf {t})$ and $ (\textbf {h}, \textbf {r}, \textbf {a})$, where the boldfaced $\textbf {h}\in \mathbb {R}^{k}$, $\textbf {r}\in \mathbb {R}^{k}$, $\textbf {t}\in \mathbb {R}^{k}$ and $\textbf {a}\in \mathbb {R}^{k}$ denote the embedding vectors of head entity $h$, relation $r$, tail entity $t$ and attribute $a$ respectively.
Next, we detail our proposed model which models both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework.
<<</Problem Formulation>>>
<<<Proposed Model>>>
In this section, we present the proposed model in detail. We first introduce the overall framework of KANE, then discuss the input embeddings of entities, relations and values in KGs, the design of the embedding propagation layers based on the graph attention network, and the loss functions for the link prediction and entity classification tasks, respectively.
<<<Overall Architecture>>>
The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, the model takes the whole set of triples of the knowledge graph as input. The task of the attribute embedding layer is to embed every value in the attribute triples into a continuous vector space while preserving the semantic information. To capture the high-order structural information of KGs, we use an attention-based embedding propagation method. This method recursively propagates the embeddings of entities from an entity's neighbors, and aggregates the neighbors with different weights. The final embeddings of entities, relations and values are fed into two different deep neural networks for two different tasks: link prediction and entity classification.
<<</Overall Architecture>>>
<<<Attribute Embedding Layer>>>
The value in an attribute triple is usually a sentence or a word. To encode the representation of a value from its sentence or word, we need to encode the variable-length sequence into a fixed-length vector. In this study, we adopt two different encoders to model the attribute value.
Bag-of-Words Encoder. The representation of an attribute value can be generated by a summation of all word embeddings of the value. We denote the attribute value $a$ as a word sequence $a = w_{1},...,w_{n}$, where $w_{i}$ is the word at position $i$. The embedding of $\textbf {a}$ can be defined as follows.
where $\textbf {w}_{i}\in \mathbb {R}^{k}$ is the word embedding of $w_{i}$.
The Bag-of-Words Encoder is a simple and intuitive method, which can capture the relative importance of words. But this method suffers in that two strings that contain the same words in a different order will have the same representation.
LSTM Encoder. In order to overcome the limitation of the Bag-of-Words encoder, we consider using LSTM networks to encode a sequence of words in an attribute value into a single vector. The final hidden state of the LSTM network is selected as the representation of the attribute value.
where $f_{lstm}$ is the LSTM network.
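A minimal PyTorch-style sketch of these two encoders is given below; the class and variable names are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class AttributeValueEncoder(nn.Module):
    """Encodes a variable-length attribute value (a sequence of word ids) into a k-dimensional vector."""
    def __init__(self, vocab_size, k, mode="lstm"):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, k)
        self.mode = mode
        if mode == "lstm":
            self.lstm = nn.LSTM(k, k, batch_first=True)

    def forward(self, word_ids):                  # word_ids: (batch, seq_len)
        w = self.embed(word_ids)                  # (batch, seq_len, k)
        if self.mode == "bow":
            return w.sum(dim=1)                   # Bag-of-Words: sum of word embeddings
        _, (h_n, _) = self.lstm(w)                # LSTM: take the final hidden state
        return h_n[-1]                            # (batch, k)

# e.g. AttributeValueEncoder(vocab_size=10000, k=128, mode="bow") for the Bag-of-Words variant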
<<</Attribute Embedding Layer>>>
<<<Embedding Propagation Layer>>>
Next we describe the details of the recursive embedding propagation method, building upon the architecture of graph convolutional networks. Moreover, by exploiting the idea of graph attention networks, our method learns to assign varying levels of importance to the entities in every entity's neighborhood and can generate attentive weights for cascaded embedding propagation. In this study, the embedding propagation layer consists of two main components: attentive embedding propagation and embedding aggregation. Here, we start by describing the attentive embedding propagation.
Attentive Embedding Propagation: Considering a KG $G$, the input to our layer is the set of entity, relation and attribute value embeddings. We use $\textbf {h}\in \mathbb {R}^{k}$ to denote the embedding of entity $h$. The neighborhood of entity $h$ can be described by $\mathcal {N}_{h} = \lbrace t,a|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $. The purpose of attentive embedding propagation is to encode $\mathcal {N}_{h}$ and output a vector $\vec{\textbf {h}}$ as the new embedding of entity $h$.
In order to obtain sufficient expressive power, one learnable linear transformation $\textbf {W}\in \mathbb {R}^{k^{^{\prime }} \times k}$ is adopted to transform the input embeddings into a higher-level feature space. In this study, we take a triple $(h,r,t)$ as an example, and the output vector $\vec{\textbf {h}}$ can be formulated as follows:
where $\pi (h,r,t)$ is the attention coefficient which indicates the importance of entity $t$ to entity $h$.
In this study, the attention coefficients also control how much information is propagated from the neighborhood through the relation. To make the attention coefficients easily comparable between different entities, the attention coefficient $\pi (h,r,t)$ is computed using a softmax function over all the triples connected with $h$. The softmax function can be formulated as follows:
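Concretely, writing $\pi _{0}(h,r,t)$ for the unnormalized score produced by the feedforward network described next (a symbol introduced here only for exposition), this normalization can be written as:

$$\pi (h,r,t) = \frac{\exp \left(\pi _{0}(h,r,t)\right)}{\sum _{(h,r^{\prime },t^{\prime })\in \mathcal {N}_{h}} \exp \left(\pi _{0}(h,r^{\prime },t^{\prime })\right)}$$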
Hereafter, we implement the attention coefficients $\pi (h,r,t)$ through a single-layer feedforward neural network, which is formulated as follows:
where the leakyRelu is selected as activation function.
As shown in Equation DISPLAY_FORM13, the attention coefficient score depends on the distance between the head entity $h$ plus the relation $r$ and the tail entity $t$, which follows the idea behind TransE that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds.
Embedding Aggregation. To stabilize the learning process of attention, we perform multi-head attention on the final layer. Specifically, we use $m$ attention mechanisms to execute the transformation of Equation DISPLAY_FORM11. An aggregator is needed to combine all embeddings of the multi-head graph attention layer. In this study, we adopt two types of aggregators:
Concatenation Aggregator concatenates all embeddings of multi-head graph attention, followed by a nonlinear transformation:
where $\mathop {\Big |\Big |}$ represents concatenation, $ \pi (h,r,t)^{i}$ are normalized attention coefficient computed by the $i$-th attentive embedding propagation, and $\textbf {W}^{i}$ denotes the linear transformation of input embedding.
Averaging Aggregator sums all embeddings of multi-head graph attention and the output embedding in the final is calculated applying averaging:
In order to encode the high-order connectivity information in KGs, we use multiple embedding propagation layers to gather the deep information propagated from the neighbors. More formally, the embedding of entity $h$ in the $l$-th layer can be defined as follows:
After performing $L$ embedding propagation layers, we can get the final embedding of entities, relations and attribute values, which include both high-order structural and attribute information of KGs. Next, we discuss the loss functions of KANE for two different tasks and introduce the learning and optimization detail.
<<</Embedding Propagation Layer>>>
<<<Output Layer and Training Details>>>
Here, we introduce the learning and optimization details for our method. Two different loss functions are carefully designed for the two different KG tasks, knowledge graph completion and entity classification. Next, the details of these two loss functions are discussed.
Knowledge graph completion. This is a classical task in the knowledge graph representation learning community. Specifically, two subtasks are included in knowledge graph completion: entity prediction and link prediction. Entity prediction aims to infer the missing head/tail entity in the testing datasets when one of them is absent, while link prediction focuses on completing a triple when the relation is missing. In this study, we borrow the idea of the translational scoring function from TransE, in which the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, measured by $d(h+r,t)= ||\textbf {h}+\textbf {r}- \textbf {t}||$. Specifically, we train our model using a hinge-loss function, given formally as
where $\gamma >0$ is a margin hyper-parameter, $[x ]_{+}$ denotes the positive part of $x$, $T=T_{R} \cup T_{A}$ is the set of valid triples, and $T^{\prime }$ is the set of corrupted triples, which can be formulated as:
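For concreteness, with these symbols the hinge loss and the corrupted-triple set take the standard TransE-style form (written here as one plausible instantiation):

$$\mathcal {L} = \sum _{(h,r,t)\in T} \sum _{(h^{\prime },r,t^{\prime })\in T^{\prime }} \left[\gamma + d(h+r,t) - d(h^{\prime }+r,t^{\prime })\right]_{+}$$

$$T^{\prime } = \lbrace (h^{\prime },r,t) \mid h^{\prime }\in E \rbrace \cup \lbrace (h,r,t^{\prime }) \mid t^{\prime }\in E \rbrace $$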
Entity Classification. For the task of entity classification, we simply use a fully connected layer and a binary cross-entropy loss (BCE) over a sigmoid activation on the output of the last layer. We minimize the binary cross-entropy on all labeled entities, given formally as:
where $E_{D}$ is the set of entities that have labels, $C$ is the dimension of the output features, which is equal to the number of classes, $y_{ej}$ is the label indicator of entity $e$ for the $j$-th class, and $\sigma (x)$ is the sigmoid function $\sigma (x) = \frac{1}{1+e^{-x}}$.
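Writing $x_{ej}$ for the $j$-th output of the last fully connected layer for entity $e$ (a symbol introduced here for concreteness), the binary cross-entropy takes its standard form:

$$\mathcal {L}_{BCE} = -\sum _{e\in E_{D}} \sum _{j=1}^{C} \left[ y_{ej}\log \sigma (x_{ej}) + (1-y_{ej})\log \left(1-\sigma (x_{ej})\right) \right]$$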
We optimize these two loss functions using mini-batch stochastic gradient descent (SGD) over the possible $\textbf {h}$, $\textbf {r}$, $\textbf {t}$, with the chain rule applied to update all parameters. At each step, we update the parameters as $\textbf {h}^{\tau +1}\leftarrow \textbf {h}^{\tau }-\lambda \nabla _{\textbf {h}}\mathcal {L}$, where $\tau $ labels the iteration step and $\lambda $ is the learning rate.
<<</Output Layer and Training Details>>>
<<</Proposed Model>>>
<<<Experiments>>>
<<<Date sets>>>
In this study, we evaluate our model on three real KGs, including two typical large-scale knowledge graphs, Freebase BIBREF0 and DBpedia BIBREF1, and a self-constructed game knowledge graph. First, we adopt a dataset extracted from Freebase, i.e., FB24K, which was used by BIBREF26. Then, we collect extra entities and relations from DBpedia, requiring that they have at least 100 mentions BIBREF7 and that they can be linked to entities in FB24K by sameAs triples. Finally, we build a dataset named DBP24K. In addition, we build a game dataset from our game knowledge graph, named Game30K. The statistics of the datasets are listed in Table TABREF24.
<<</Date sets>>>
<<<Experiments Setting>>>
In evaluation, we compare our method with three types of models:
1) Typical Methods. Three typical knowledge graph embedding methods, namely TransE, TransR and TransH, are selected as baselines. For TransE, the dissimilarity measure is implemented with the L1-norm, and both relations and entities are replaced during negative sampling. For TransR, we directly use the source code released in BIBREF9. For better performance, relation replacement in negative sampling is used, following the authors' suggestion.
2) Path-based Methods. We compare our method with two typical path-based models, PTransE and ALL-PATHS BIBREF18. PTransE is the first method to model relation paths in the KG embedding task, and ALL-PATHS improves on PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.
3) Attribute-incorporated Methods. Several state-of-the-art attribute-incorporated methods, including R-GCN BIBREF24 and KR-EAR BIBREF26, are used to compare with our method on the three real datasets.
In addition, four variants of KANE, each of which defines its own specific way of computing the attribute value embedding and performing embedding aggregation, are used as baselines in the evaluation. In this study, we name these four variants KANE (BOW+Concatenation), KANE (BOW+Average), KANE (LSTM+Concatenation) and KANE (LSTM+Average). Our method is learned with mini-batch SGD. As for hyper-parameters, we select the batch size among {16, 32, 64, 128} and the learning rate $\lambda $ for SGD among {0.1, 0.01, 0.001}. For a fair comparison, we also set the vector dimensions of all entities and relations to the same $k \in ${128, 258, 512, 1024}, use the same dissimilarity measure ($l_{1}$ or $l_{2}$ distance) in the loss function, and use the same number of negative examples $n$ among {1, 10, 20, 40}. The training time on all datasets is limited to at most 400 epochs. The best models are selected by a grid search and early stopping on validation sets.
<<</Experiments Setting>>>
<<<Entity Classification>>>
<<<Evaluation Protocol.>>>
In entity classification, the aim is to predict the type of an entity. For all baseline models, we first obtain the entity embeddings on the different datasets through the default parameter settings in their original papers or implementations. Then, Logistic Regression is used as the classifier, which takes the entity embeddings as features. In the evaluation, we randomly selected 10% of the training set as the validation set and use accuracy as the evaluation metric.
<<</Evaluation Protocol.>>>
<<<Test Performance.>>>
Experimental results of entity classification on the test sets of all the datasets are shown in Table TABREF25. The results clearly demonstrate that our proposed method significantly outperforms the state-of-the-art in accuracy on all three datasets. For a more in-depth performance analysis, we note: (1) Among all baselines, path-based methods and attribute-incorporated methods outperform the three typical methods. This indicates that incorporating extra information can improve knowledge graph embedding performance; (2) The four variants of KANE always outperform the baseline methods. The main reasons why KANE works well are two-fold: 1) KANE can capture high-order structural information of KGs in an efficient, explicit manner and pass this information to neighboring entities; 2) KANE leverages the rich information encoded in attribute triples, and this rich semantic information can further improve the performance of knowledge graph embedding; (3) The variant of KANE that uses the LSTM encoder and the concatenation aggregator outperforms the other variants. The main reason is that the LSTM encoder can distinguish word order and the concatenation aggregator combines all embeddings of multi-head attention in a higher-level feature space, which provides sufficient expressive power.
<<</Test Performance.>>>
<<<Efficiency Evaluation.>>>
Figure FIGREF30 shows the test accuracy with increasing epochs on DBP24K and Game30K. We can see that the test accuracy first increases rapidly in the first ten iterations, but reaches a stable stage when the epoch number is larger than 40. Figure FIGREF31 shows the test accuracy with different embedding sizes and training data proportions. We note that a too small embedding size or training data proportion cannot generate sufficient global information. In order to further analyze the embeddings learned by our method, we use the t-SNE tool BIBREF27 to visualize the learned embeddings. Figure FIGREF32 shows the visualization of the 256-dimensional entity embeddings on Game30K learned by KANE, R-GCN, PTransE and TransE. We observe that our method can learn more discriminative entity embeddings than the other methods.
<<</Efficiency Evaluation.>>>
<<</Entity Classification>>>
<<<Knowledge Graph Completion>>>
The purpose of knowledge graph completion is to complete a triple $(h, r, t)$ when one of $h, r, t$ is missing, a task used in much of the literature BIBREF7. Two measures are considered as our evaluation metrics: (1) the mean rank of correct entities or relations (Mean Rank); (2) the proportion of correct entities or relations ranked in the top 1 (Hits@1, for relations) or top 10 (Hits@10, for entities). Following the setting in BIBREF7, we also adopt the two evaluation settings named "raw" and "filter" in order to avoid misleading behavior.
The results of entity and relation prediction on FB24K are shown in Table TABREF33. These results indicate that KANE still outperforms the other baselines significantly and consistently. This also verifies the necessity of modeling high-order structural and attribute information of KGs in knowledge graph embedding models.
<<</Knowledge Graph Completion>>>
<<</Experiments>>>
<<<Conclusion and Future Work>>>
Many recent works have demonstrated the benefits of knowledge graph embedding in knowledge graph completion, such as relation extraction. However, we argue that knowledge graph embedding methods still have room for improvement. First, TransE and most of its extensions only take direct relations between entities into consideration. Second, most existing knowledge graph embedding methods only leverage the relation triples of KGs while ignoring a large number of attribute triples. In order to overcome these limitations, inspired by the recent developments of graph convolutional networks, we propose a new knowledge graph embedding method, named KANE. The key idea of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representation of a given entity. Empirical results on three datasets show that KANE significantly outperforms seven state-of-the-art methods.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nProblem Formulation\nProposed Model\nOverall Architecture\nAttribute Embedding Layer\nEmbedding Propagation Layer\nOutput Layer and Training Details\nExperiments\nDate sets\nExperiments Setting\nEntity Classification\nEvaluation Protocol.\nTest Performance.\nEfficiency Evaluation.\nKnowledge Graph Completion\nConclusion and Future Work"
],
"type": "outline"
}
|
1909.13375
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Tag-based Multi-Span Extraction in Reading Comprehension
<<<Abstract>>>
With models reaching human performance on many popular reading comprehension datasets in recent years, a new dataset, DROP, introduced questions that were expected to present a harder challenge for reading comprehension models. Among these new types of questions were "multi-span questions", questions whose answers consist of several spans from either the paragraph or the question itself. Until now, only one model attempted to tackle multi-span questions as a part of its design. In this work, we suggest a new approach for tackling multi-span questions, based on sequence tagging, which differs from previous approaches for answering span questions. We show that our approach leads to an absolute improvement of 29.7 EM and 15.1 F1 compared to existing state-of-the-art results, while not hurting performance on other question types. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataset.
<<</Abstract>>>
<<<Introduction>>>
The task of reading comprehension, where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years. With models reaching human performance on the popular SQuAD dataset BIBREF0, and with much of the most popular reading comprehension datasets having been solved BIBREF1, BIBREF2, a new dataset, DROP BIBREF3, was recently published.
DROP aimed to present questions that require more complex reasoning in order to answer than that of previous datasets, in a hope to push the field towards a more comprehensive analysis of paragraphs of text. In addition to questions whose answers are a single continuous span from the paragraph text (questions of a type already included in SQuAD), DROP introduced additional types of questions. Among these new types were questions that require simple numerical reasoning, i.e questions whose answer is the result of a simple arithmetic expression containing numbers from the passage, and questions whose answers consist of several spans taken from the paragraph or the question itself, what we will denote as "multi-span questions".
Of all the existing models that tried to tackle DROP, only one model BIBREF4 directly targeted multi-span questions in a manner that wasn't just a by-product of the model's overall performance. In this paper, we propose a new method for tackling multi-span questions. Our method takes a different path from that of the aforementioned model. It does not try to generalize the existing approach for tackling single-span questions, but instead attempts to attack this issue with a new, tag-based, approach.
<<</Introduction>>>
<<<Related Work>>>
Numerically-aware QANet (NAQANet) BIBREF3 was the model released with DROP. It uses QANET BIBREF5, at the time the best-performing published model on SQuAD 1.1 BIBREF0 (without data augmentation or pretraining), as the encoder. On top of QANET, NAQANet adds four different output layers, which we refer to as "heads". Each of these heads is designed to tackle a specific question type from DROP, where these types were identified by DROP's authors post-creation of the dataset. These four heads are (1) Passage span head, designed for producing answers that consist of a single span from the passage. This head deals with the type of questions already introduced in SQuAD. (2) Question span head, for answers that consist of a single span from the question. (3) Arithmetic head, for answers that require adding or subtracting numbers from the passage. (4) Count head, for answers that require counting and sorting entities from the text. In addition, to determine which head should be used to predict an answer, a 4-way categorical variable, as per the number of heads, is trained. We denote this categorical variable as the "head predictor".
Numerically-aware BERT (NABERT+) BIBREF6 introduced two main improvements over NAQANET. The first was to replace the QANET encoder with BERT. This change alone resulted in an absolute improvement of more than eight points in both EM and F1 metrics. The second improvement was to the arithmetic head, consisting of the addition of "standard numbers" and "templates". Standard numbers were predefined numbers which were added as additional inputs to the arithmetic head, regardless of their occurrence in the passage. Templates were an attempt to enrich the head's arithmetic capabilities, by adding the ability of doing simple multiplications and divisions between up to three numbers.
MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable.
Additionally, MTMSN introduced two new other, non span-related, components. The first was a new "negation" head, meant to deal with questions deemed as requiring logical negation (e.g. "How many percent were not German?"). The second was improving the arithmetic head by using beam search to re-rank candidate arithmetic expressions.
<<</Related Work>>>
<<<Model>>>
Problem statement. Given a pair $(x^P,x^Q)$ of a passage and a question respectively, both comprised of tokens from a vocabulary $V$, we wish to predict an answer $y$. The answer could be either a collection of spans from the input, or a number, supposedly arrived at by performing arithmetic reasoning on the input. We want to estimate $p(y;x^P,x^Q)$.
The basic structure of our model is shared with NABERT+, which in turn is shared with that of NAQANET (the model initially released with DROP). Consequently, meticulously presenting every part of our model would very likely prove redundant. As a reasonable compromise, we will introduce the shared parts with more brevity, and will go into greater detail when presenting our contributions.
<<<NABERT+>>>
Assume there are $K$ answer heads in the model and their weights denoted by $\theta $. For each pair $(x^P,x^Q)$ we assume a latent categorical random variable $z\in \left\lbrace 1,\ldots \,K\right\rbrace $ such that the probability of an answer $y$ is
where each component of the mixture corresponds to an output head such that
Note that a head is not always capable of producing the correct answer $y_\text{gold}$ for each type of question, in which case $p\left(y_\text{gold} \vert z ; x^{P},x^{Q},\theta \right)=0$. For example, the arithmetic head, whose output is always a single number, cannot possibly produce a correct answer for a multi-span question.
For a multi-span question with an answer composed of $l$ spans, denote $y_{{\text{gold}}_{\textit {MS}}}=\left\lbrace y_{{\text{gold}}_1}, \ldots , y_{{\text{gold}}_l} \right\rbrace $. NAQANET and NABERT+ had no head capable of outputting correct answers for multi-span questions. Instead of ignoring them in training, both models settled on using "semi-correct answers": each $y_\text{gold} \in y_{{\text{gold}}_{\textit {MS}}}$ was considered to be a correct answer (only in training). By deliberately encouraging the model to provide partial answers for multi-span questions, they were able to improve the corresponding F1 score. As our model does have a head with the ability to answer multi-span questions correctly, we didn't provide the aforementioned semi-correct answers to any of the other heads. Otherwise, we would have skewed the predictions of the head predictor and effectively mislead the other heads to believe they could predict correct answers for multi-span questions.
<<<Heads Shared with NABERT+>>>
Before going over the answer heads, two additional components should be introduced - the summary vectors, and the head predictor.
Summary vectors. The summary vectors are two fixed-size learned representations of the question and the passage, which serve as an input for some of the heads. To create the summary vectors, first define $\mathbf {T}$ as BERT's output on a $(x^{P},x^{Q})$ input. Then, let $\mathbf {T}^{P}$ and $\mathbf {T}^{Q}$ be subsequences of T that correspond to $x^P$ and $x^Q$ respectively. Finally, let us also define Bdim as the dimension of the tokens in $\mathbf {T}$ (e.g 768 for BERTbase), and have $\mathbf {W}^P \in \mathbb {R}^\texttt {Bdim}$ and $\mathbf {W}^Q \in \mathbb {R}^\texttt {Bdim}$ as learned linear layers. Then, the summary vectors are computed as:
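One common way to realize such summary vectors with these learned layers is attention-weighted pooling (written here as a plausible instantiation consistent with the definitions above):

$$\alpha ^{P} = \operatorname{softmax}\left(\mathbf {T}^{P}\mathbf {W}^{P}\right), \qquad \mathbf {h}^{P} = \sum _{i} \alpha ^{P}_{i}\, \mathbf {T}^{P}_{i}$$

and analogously $\mathbf {h}^{Q}$ from $\mathbf {T}^{Q}$ and $\mathbf {W}^{Q}$.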
Head predictor. A learned categorical variable with its number of outcomes equal to the number of answer heads in the model. Used to assign probabilities for using each of the heads in prediction.
where FFN is a two-layer feed-forward network with RELU activation.
Passage span. Define $\textbf {W}^S \in \mathbb {R}^\texttt {Bdim}$ and $\textbf {W}^E \in \mathbb {R}^\texttt {Bdim}$ as learned vectors. Then the probabilities of the start and end positions of a passage span are computed as
Question span. The probabilities of the start and end positions of a question span are computed as
where $\textbf {e}^{|\textbf {T}^Q|}\otimes \textbf {h}^P$ repeats $\textbf {h}^P$ for each component of $\textbf {T}^Q$.
Count. Counting is treated as a multi-class prediction problem with the numbers 0-9 as possible labels. The label probabilities are computed as
Arithmetic. As in NAQANET, this head obtains all of the numbers from the passage, and assigns a plus, minus or zero ("ignore") for each number. As BERT uses wordpiece tokenization, some numbers are broken up into multiple tokens. Following NABERT+, we chose to represent each number by its first wordpiece. That is, if $\textbf {N}^i$ is the set of tokens corresponding to the $i^\text{th}$ number, we define a number representation as $\textbf {h}_i^N = \textbf {N}^i_0$.
The selection of the sign for each number is a multi-class prediction problem with options $\lbrace 0, +, -\rbrace $, and the probabilities for the signs are given by
As for NABERT+'s two additional arithmetic features, we decided on using only the standard numbers, as the benefits from using templates were deemed inconclusive. Note that unlike the single-span heads, which are related to our introduction of a multi-span head, the arithmetic and count heads were not intended to play a significant role in our work. We didn't aim to improve results on these types of questions, perhaps only as a by-product of improving the general reading comprehension ability of our model.
<<</Heads Shared with NABERT+>>>
<<</NABERT+>>>
<<<Multi-Span Head>>>
A subset of questions that wasn't directly dealt with by the base models (NAQANET, NABERT+) is questions that have an answer which is composed of multiple non-continuous spans. We suggest a head that will be able to deal with both single-span and multi-span questions.
To model an answer which is a collection of spans, the multi-span head uses the $\mathtt {BIO}$ tagging format BIBREF8: $\mathtt {B}$ is used to mark the beginning of a span, $\mathtt {I}$ is used to mark the inside of a span and $\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans.
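For illustration, a small sketch of how a predicted $\mathtt {BIO}$ tag sequence can be decoded back into a collection of spans (the function and variable names below are our own, not part of the model):

def decode_bio(tokens, tags):
    """Convert parallel lists of tokens and B/I/O tags into a list of text spans."""
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":                   # a new span starts here
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:     # continue the open span
            current.append(tok)
        else:                            # "O" (or a stray "I") closes any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

# e.g. decode_bio(["X", "Y", "Z", "Z"], ["O", "B", "B", "O"]) returns ["Y", "Z"]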
As words are broken up by the wordpiece tokenization for BERT, we decided on only considering the representation of the first sub-token of the word to tag, following the NER task from BIBREF2.
For the $i$-th token of an input, the probability to be assigned a $\text{tag} \in \left\lbrace {\mathtt {B},\mathtt {I},\mathtt {O}} \right\rbrace $ is computed as
<<</Multi-Span Head>>>
<<<Objective and Training>>>
To train our model, we try to maximize the log-likelihood of the correct answer $p(y_\text{gold};x^{P},x^{Q},\theta )$ as defined in Section SECREF2. If no head is capable of predicting the gold answer, the sample is skipped.
We enumerate over every answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $ (Passage Span, Question Span, Count, Arithmetic, Multi-Span) to compute each of the objective's addends:
Note that we are in a weakly supervised setup: the answer type is not given, and neither is the correct arithmetic expression required for deriving some answers. Therefore, it is possible that $y_\text{gold}$ could be derived by more than one way, even from the same head, with no indication of which is the "correct" one.
We use the weakly supervised training method used in NABERT+ and NAQANET. Based on BIBREF9, for each head we find all the executions that evaluate to the correct answer and maximize their marginal likelihood.
For a datapoint $\left(y, x^{P}, x^{Q} \right)$ let $\chi ^z$ be the set of all possible ways to get $y$ for answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $. Then, as in NABERT+, we have
Finally, for the arithmetic head, let $\mu $ be the set of all the standard numbers and the numbers from the passage, and let $\mathbf {\chi }^{\textit {A}}$ be the set of correct sign assignments to these numbers. Then, we have
<<<Multi-Span Head Training Objective>>>
Denote by ${\chi }^{\textit {MS}}$ the set of correct tag sequences. If the concatenation of a question and a passage is $m$ tokens long, then denote a correct tag sequence as $\left(\text{tag}_1,\ldots ,\text{tag}_m\right)$.
We approximate the likelihood of a tag sequence by assuming independence between the sequence's positions, and multiplying the likelihoods of all the correct tags in the sequence. Then, we have
<<</Multi-Span Head Training Objective>>>
<<<Multi-Span Head Correct Tag Sequences>>>
Since a given multi-span answer is a collection of spans, it is required to obtain its matching tag sequences in order to compute the training objective.
In what we consider to be a correct tag sequence, each answer span will be marked at least once. Due to the weakly supervised setup, we consider all the question/passage spans that match the answer spans as being correct. To illustrate, consider the following simple example. Given the text "X Y Z Z" and the correct multi-span answer ["Y", "Z"], there are three correct tag sequences: $\mathtt {O\,B\,B\,B}$,$\quad $ $\mathtt {O\,B\,B\,O}$,$\quad $ $\mathtt {O\,B\,O\,B}$.
<<</Multi-Span Head Correct Tag Sequences>>>
<<<Dealing with too Many Correct Tag Sequences>>>
The number of correct tag sequences can be expressed by
where $s$ is the number of spans in the answer and $\#_i$ is the number of times the $i^\text{th}$ span appears in the text.
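One expression consistent with these definitions is the product below; for the earlier “X Y Z Z” example with the answer [“Y”, “Z”], it gives $(2^{1}-1)(2^{2}-1)=3$, matching the three sequences listed above:

$$\prod _{i=1}^{s} \left(2^{\#_{i}} - 1\right)$$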
For questions with a reasonable number of correct tag sequences, we generate all of them before the training starts. However, there is a small group of questions for which the number of such sequences is between 10,000 and 100,000,000 - too many to generate and train on. In such cases, inspired by BIBREF9, instead of just using an arbitrary subset of the correct sequences, we use beam search to generate the top-k predictions of the training model, and then filter out the incorrect sequences. Compared to using an arbitrary subset, using these sequences focuses the optimization on answers that are more compatible with the model. If no correct tag sequences were predicted within the top-k, we use the tag sequence that has all of the answer spans marked.
<<</Dealing with too Many Correct Tag Sequences>>>
<<</Objective and Training>>>
<<<Tag Sequence Prediction with the Multi-Span Head>>>
Based on the outputs $\textbf {p}_{i}^{{\text{tag}}_{i}}$ we would like to predict the most likely sequence given the $\mathtt {BIO}$ constraints. Denote by $\textit {validSeqs}$ the set of all $\mathtt {BIO}$ sequences of length $m$ that are valid according to the rules specified in Section SECREF5. The $\mathtt {BIO}$ tag sequence to predict is then

$\hat{\textbf {y}} = \arg \max _{\left(\text{tag}_1,\ldots ,\text{tag}_m\right) \in \textit {validSeqs}} \prod _{i=1}^{m} \textbf {p}_{i}^{{\text{tag}}_{i}}$
We considered the following approaches:
<<<Viterbi Decoding>>>
A natural candidate for obtaining the most likely sequence is Viterbi decoding BIBREF10, with transition probabilities learned by a $\mathtt {BIO}$-constrained Conditional Random Field (CRF) BIBREF11. However, further inspection of our sequences' properties reveals that such a computational effort is probably not necessary, as explained in the following paragraphs.
<<</Viterbi Decoding>>>
<<<Beam Search>>>
Due to our use of $\mathtt {BIO}$ tags and their constraints, past tag predictions affect future ones only from the last $\mathtt {B}$ prediction onward, and only as long as the best tag to predict is $\mathtt {I}$. Considering the frequency and length of the correct spans in the question and the passage, past positions effectively have no influence on future ones beyond a very few positions ahead. Together with the fact that no more than 3 tags need to be considered at each prediction step, this makes beam search a very reasonable way to obtain the most likely sequence, yielding near-optimal results even with small beam widths.
<<</Beam Search>>>
<<<Greedy Tagging>>>
Notice that greedy tagging does not enforce the $\mathtt {BIO}$ constraints. However, since the multi-span head's training objective adheres to the $\mathtt {BIO}$ constraints by being trained only on correct tag sequences, we can expect that even with greedy tagging the predictions will mostly adhere to these constraints as well. Any violations must be amended post-prediction. Albeit faster, greedy tagging resulted in a small performance hit, as seen in Table TABREF26.
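A possible realization of greedy tagging with a post-prediction amendment is sketched below; the specific repair rule (turning an unsupported $\mathtt {I}$ into a $\mathtt {B}$) is our own assumption, as the paper does not spell out how violations are amended:

import numpy as np

TAGS = ["O", "B", "I"]

def greedy_bio_tagging(probs):
    """Pick the highest-probability tag per token, then amend BIO violations.

    probs: array of shape (m, 3) with per-token probabilities over ("O", "B", "I").
    An "I" that does not continue a span (i.e. not preceded by "B" or "I") violates
    the BIO constraints; here we amend it by turning it into a "B".
    """
    tags = [TAGS[i] for i in np.argmax(probs, axis=1)]
    for i, tag in enumerate(tags):
        if tag == "I" and (i == 0 or tags[i - 1] == "O"):
            tags[i] = "B"   # amendment: start a new span instead
    return tags

# Toy example: the last token greedily gets "I" with no preceding span
probs = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.7, 0.1, 0.2],
                  [0.2, 0.1, 0.7]])
print(greedy_bio_tagging(probs))  # ['O', 'B', 'O', 'B']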
<<</Greedy Tagging>>>
<<</Tag Sequence Prediction with the Multi-Span Head>>>
<<</Model>>>
<<<Preprocessing>>>
We tokenize the passage, question, and all answer texts using the BERT uncased wordpiece tokenizer from huggingface. The tokenization resulting from each $(x^P,x^Q)$ input pair is truncated at 512 tokens so it can be fed to BERT as an input. However, before tokenizing the dataset texts, we perform additional preprocessing as listed below.
<<<Simple Preprocessing>>>
<<<Improved Textual Parsing>>>
The raw dataset included almost a thousand HTML entities that did not get parsed properly, e.g. "&nbsp;" appearing literally instead of a simple space. In addition, we fixed some quirks that were introduced by the original Wikipedia parsing method. For example, when encountering a reference to an external source that included a specific page from that reference, the original parser ended up introducing a redundant ":<PAGE NUMBER>" into the parsed text.
<<</Improved Textual Parsing>>>
<<<Improved Handling of Numbers>>>
Although we previously stated that we are not focusing on improving arithmetic performance, while analyzing the training process we encountered two arithmetic-related issues that could be resolved rather quickly: a precision issue and a number extraction issue. Regarding precision, we noticed that while either generating expressions for the arithmetic head or using the arithmetic head to predict a numeric answer, the value resulting from an arithmetic operation would not always yield the exact result due to floating-point precision limitations. For example, $5.8 + 6.6 = 12.3999...$ instead of $12.4$. This issue caused a significant performance hit of about 1.5 points for both F1 and EM and was fixed by simply rounding numbers to 5 decimal places, assuming that no answer requires greater precision. Regarding number extraction, we noticed that some numeric entities required to produce a correct answer were not being extracted from the passage. Examples include ordinals (121st, 189th) and some "per-" units (1,580.7/km2, 1050.95/month).
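The two fixes can be illustrated with the hedged sketch below; the rounding helper and the extraction regex are our own simplified stand-ins, not the preprocessing code actually used:

import re

def safe_eval_sum(numbers, signs, precision=5):
    """Evaluate a signed sum and round to avoid floating-point artefacts,
    e.g. 5.8 + 6.6 evaluating to 12.399999999999999 instead of 12.4."""
    return round(sum(sign * number for sign, number in zip(signs, numbers)), precision)

# Simplified pattern for surface forms that should also yield numbers,
# such as ordinals ("121st") and "per-" units ("1,050.95/month").
NUMBER_PATTERN = re.compile(r"\d[\d,]*(?:\.\d+)?")

def extract_numbers(text):
    return [float(match.replace(",", "")) for match in NUMBER_PATTERN.findall(text)]

print(safe_eval_sum([5.8, 6.6], [1, 1]))                          # 12.4
print(extract_numbers("He finished 121st, earning 1,050.95/month"))  # [121.0, 1050.95]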
<<</Improved Handling of Numbers>>>
<<</Simple Preprocessing>>>
<<<Using NER for Cleaning Up Multi-Span Questions>>>
The training dataset contains multi-span questions with answers that are clearly incorrect, with examples shown in Table TABREF22. In order to mitigate this, we applied an answer-cleaning technique using a pretrained Named Entity Recognition (NER) model BIBREF12 in the following manner: (1) Pre-define question prefixes whose answer spans are expected to contain only a specific entity type and filter the matching questions. (2) For a given answer of a filtered question, remove any span that does not contain at least one token of the expected type, where the types are determined by applying the NER model on the passage. For example, if a question starts with "who scored", we expect that any valid span will include a person entity ($\mathtt {PER}$). By applying such rules, we discovered that at least 3% of the multi-span questions in the training dataset included incorrect spans. As our analysis of prefixes wasn't exhaustive, we believe that this method could yield further gains. Table TABREF22 shows a few of our cleaning method results, where we perfectly clean the first two questions, and partially clean a third question.
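A simplified sketch of this cleaning rule is given below; the prefix-to-entity-type table (beyond the "who scored" example) and the exact-token entity lookup are hypothetical simplifications, not the rules actually used:

# Hypothetical prefix-to-entity-type rules; the paper's full rule set is not reproduced here.
PREFIX_RULES = {
    "who scored": "PER",
    "which players": "PER",
    "what countries": "GPE",
}

def clean_multi_span_answer(question, answer_spans, passage_entities):
    """Drop answer spans that contain no token of the expected entity type.

    passage_entities: mapping from token text to the entity type assigned by a
    pretrained NER model run over the passage (simplified to exact token lookup).
    """
    expected_type = None
    for prefix, entity_type in PREFIX_RULES.items():
        if question.lower().startswith(prefix):
            expected_type = entity_type
            break
    if expected_type is None:          # question not covered by any rule: keep as is
        return answer_spans
    return [span for span in answer_spans
            if any(passage_entities.get(token) == expected_type for token in span.split())]

# Toy usage: "the Bears" contains no PER token, so it is removed
entities = {"Smith": "PER", "Jones": "PER", "Bears": "ORG"}
print(clean_multi_span_answer("who scored the touchdowns?",
                              ["Smith", "Jones", "the Bears"], entities))  # ['Smith', 'Jones']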
<<</Using NER for Cleaning Up Multi-Span Questions>>>
<<</Preprocessing>>>
<<<Training>>>
The starting point for our implementation was the NABERT+ model, which in turn was based on allenai's NAQANET. Our implementation can be found on GitHub. All three models utilize the allennlp framework. The pretrained BERT models were supplied by huggingface. For our base model we used bert-base-uncased. For our large models we used the standard bert-large-uncased-whole-word-masking and the SQuAD-fine-tuned bert-large-uncased-whole-word-masking-finetuned-squad.
Due to limited computational resources, we did not perform any hyperparameter search. We preferred to focus our efforts on the ablation studies, in the hope of gaining further insights into the effect of the components that we ourselves introduced. For ease of performance comparison, we followed NABERT+'s training settings: we used the BERT Adam optimizer from huggingface with default settings and a learning rate of $1e^{-5}$. The only difference was that we used a batch size of 12. We trained our base model for 20 epochs. For the large models we used a batch size of 3 with a learning rate of $5e^{-6}$ and trained for 5 epochs, except for the model without the single-span heads, which was trained with a batch size of 2 for 7 epochs. F1 was used as our validation metric. All models were trained on a single GPU with 12-16GB of memory.
<<</Training>>>
<<<Results and Discussion>>>
<<<Performance on DROP's Development Set>>>
Table TABREF24 shows the results on DROP's development set. Compared to our base models, our large models exhibit a substantial improvement across all metrics.
<<<Comparison to the NABERT+ Baseline>>>
We can see that our base model surpasses the NABERT+ baseline in every metric. The major improvement in multi-span performance was expected, as our multi-span head was introduced specifically to tackle this type of question. For the other types, most of the improvement came from better preprocessing. A more detailed discussion can be found in Section SECREF36.
<<</Comparison to the NABERT+ Baseline>>>
<<<Comparison to MTMSN>>>
Notice that different BERTlarge models were used, so the comparison is less direct. Overall, our large models exhibit results similar to those of MTMSNlarge.
For multi-span questions we achieve significantly better performance. While a breakdown of metrics was only available for MTMSNlarge, notice that even when comparing these metrics to our base model, we still achieve a 12.2 absolute improvement in EM and a 2.3 improvement in F1 - and this while comparing a base model to a large model (for reference, note the 8-point improvement between MTMSNbase and MTMSNlarge in both EM and F1). Our best model, large-squad, exhibits a huge improvement of 29.7 in EM and 15.1 in F1 compared to MTMSNlarge.
When comparing single-span performance, our best model exhibits slightly better results, but it should be noted that it retains the single-span heads from NABERT+, while in MTMSN they have one head to predict both single-span and multi-span answers. For a fairer comparison, we trained our model with the single-span heads removed, where our multi-span head remained the only head aimed for handling span questions. With this no-single-span-heads setting, while our multi-span performance even improved a bit, our single-span performance suffered a slight drop, ending up trailing by 0.8 in EM and 0.6 in F1 compared to MTMSN. Therefore, it could prove beneficial to try and analyze the reasons behind each model's (ours and MTMSN) relative advantages, and perhaps try to combine them into a more holistic approach of tackling span questions.
<<</Comparison to MTMSN>>>
<<</Performance on DROP's Development Set>>>
<<<Performance on DROP's Test Set>>>
Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.
<<</Performance on DROP's Test Set>>>
<<<Ablation Studies>>>
In order to analyze the effect of each of our changes, we conduct ablation studies on the development set, depicted in Table TABREF26.
Not using the simple preprocessing from Section SECREF17 resulted in a 2.5-point decrease in both EM and F1. The numeric questions were the most affected, with their performance dropping by 3.5 points. Given that number questions make up about 61% of the dataset, we can deduce that our improved number handling is responsible for about a 2.1-point gain, while the rest can be attributed to the improved Wikipedia parsing.
Although NER span cleaning (Section SECREF23) affected only 3% of the multi-span questions, it provided a solid improvement of 5.4 EM in multi-span questions and 1.5 EM in single-span questions. The single-span improvement is probably due to the combination of better multi-span head learning as a result of fixing multi-span questions and the fact that the multi-span head can answer single-span questions as well.
Not using the single-span heads results in a slight drop in multi-span performance, and a noticeable drop in single-span performance. However, when performing the same comparison between our large models (see Table TABREF24), this performance gap becomes significantly smaller.
As expected, not using the multi-span head causes the multi-span performance to plummet. Note that for this ablation test the single-span heads were permitted to train on multi-span questions.
Compared to using greedy decoding in the prediction of multi-span questions, using beam search results in a small improvement. We used a beam width of 5 and did not perform extensive tuning of the beam width.
<<</Ablation Studies>>>
<<</Results and Discussion>>>
<<<Conclusion>>>
In this work, we introduced a new approach for tackling multi-span questions in reading comprehension datasets. This approach is based on individually tagging each token with a categorical tag, relying on the tokens' contextual representation to bridge the information gap resulting from the tokens being tagged individually.
First, we show that integrating this new approach into an existing model, NABERT+, does not hinder performance on other question types, while substantially improving the results on multi-span questions. Later, we compare our results to the current state-of-the-art on multi-span questions. We show that our model has a clear advantage in handling multi-span questions, with a 29.7 absolute improvement in EM and a 15.1 absolute improvement in F1. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataset. Finally, we present some ablation studies, analyzing the benefit gained from individual components of our model.
We believe that combining our tag-based approach for handling multi-span questions with current successful techniques for handling single-span questions could prove beneficial in finding better, more holistic ways, of tackling span questions in general.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nModel\nNABERT+\nHeads Shared with NABERT+\nMulti-Span Head\nObjective and Training\nMulti-Span Head Training Objective\nMulti-Span Head Correct Tag Sequences\nDealing with too Many Correct Tag Sequences\nTag Sequence Prediction with the Multi-Span Head\nViterbi Decoding\nBeam Search\nGreedy Tagging\nPreprocessing\nSimple Preprocessing\nImproved Textual Parsing\nImproved Handling of Numbers\nUsing NER for Cleaning Up Multi-Span Questions\nTraining\nResults and Discussion\nPerformance on DROP's Development Set\nComparison to the NABERT+ Baseline\nComparison to MTMSN\nPerformance on DROP's Test Set\nAblation Studies\nConclusion"
],
"type": "outline"
}
|
1910.11493
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection
<<<Abstract>>>
The SIGMORPHON 2019 shared task on cross-lingual transfer and contextual analysis in morphology examined transfer learning of inflection between 100 language pairs, as well as contextual lemmatization and morphosyntactic description in 66 languages. The first task evolves past years' inflection tasks by examining transfer of morphological inflection knowledge from a high-resource language to a low-resource language. This year also presents a new second challenge on lemmatization and morphological feature analysis in context. All submissions featured a neural component and built on either this year's strong baselines or highly ranked systems from previous years' shared tasks. Every participating team improved in accuracy over the baselines for the inflection task (though not Levenshtein distance), and every team in the contextual analysis task improved on both state-of-the-art neural and non-neural baselines.
<<</Abstract>>>
<<<Introduction>>>
While producing a sentence, humans combine various types of knowledge to produce fluent output—various shades of meaning are expressed through word selection and tone, while the language is made to conform to underlying structural rules via syntax and morphology. Native speakers are often quick to identify disfluency, even if the meaning of a sentence is mostly clear.
Automatic systems must also consider these constraints when constructing or processing language. Strong enough language models can often reconstruct common syntactic structures, but are insufficient to properly model morphology. Many languages implement large inflectional paradigms that mark both function and content words with varying levels of morphosyntactic information. For instance, Romanian verb forms inflect for person, number, tense, mood, and voice; meanwhile, Archi verbs can take on thousands of forms BIBREF0. Such complex paradigms produce large inventories of words, all of which must be producible by a realistic system, even though a large percentage of them will never be observed over billions of lines of linguistic input. Compounding the issue, good inflectional systems often require large amounts of supervised training data, which is infeasible in many of the world's languages.
This year's shared task is concentrated on encouraging the construction of strong morphological systems that perform two related but different inflectional tasks. The first task asks participants to create morphological inflectors for a large number of under-resourced languages, encouraging systems that use highly-resourced, related languages as a cross-lingual training signal. The second task welcomes submissions that invert this operation in light of contextual information: Given an unannotated sentence, lemmatize each word, and tag them with a morphosyntactic description. Both of these tasks extend upon previous morphological competitions, and the best submitted systems now represent the state of the art in their respective tasks.
<<</Introduction>>>
<<<Tasks and Evaluation>>>
<<<Task 1: Cross-lingual transfer for morphological inflection>>>
Annotated resources for the world's languages are not distributed equally—some languages simply have more as they have more native speakers willing and able to annotate more data. We explore how to transfer knowledge from high-resource languages that are genetically related to low-resource languages.
The first task iterates on last year's main task: morphological inflection BIBREF1. Instead of giving some number of training examples in the language of interest, we provided only a limited number in that language. To accompany it, we provided a larger number of examples in either a related or unrelated language. Each test example asked participants to produce some other inflected form when given a lemma and a bundle of morphosyntactic features as input. The goal, thus, is to perform morphological inflection in the low-resource language, having hopefully exploited some similarity to the high-resource language. Models which perform well here can aid downstream tasks like machine translation in low-resource settings. All datasets were resampled from UniMorph, which makes them distinct from past years.
The mode of the task is inspired by BIBREF2, who fine-tune a model pre-trained on a high-resource language to perform well on a low-resource language. We do not, though, require that models be trained by fine-tuning. Joint modeling or any number of methods may be explored instead.
<<<Example>>>
The model will have access to type-level data in a low-resource target language, plus a high-resource source language. We give an example here of Asturian as the target language with Spanish as the source language.
<<</Example>>>
<<<Evaluation>>>
We score the output of each system in terms of its predictions' exact-match accuracy and the average Levenshtein distance between the predictions and their corresponding true forms.
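For concreteness, a minimal sketch of this scoring is given below; the official evaluation script may differ in details such as normalization, but the two metrics themselves are straightforward:

def levenshtein(a, b):
    """Standard edit distance between two strings (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def evaluate(predictions, references):
    """Exact-match accuracy and average Levenshtein distance over a test set."""
    assert len(predictions) == len(references)
    accuracy = sum(p == r for p, r in zip(predictions, references)) / len(references)
    avg_distance = sum(levenshtein(p, r) for p, r in zip(predictions, references)) / len(references)
    return accuracy, avg_distance

print(evaluate(["perros", "gatos"], ["perros", "gatas"]))  # (0.5, 0.5)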
<<</Evaluation>>>
<<</Task 1: Cross-lingual transfer for morphological inflection>>>
<<<Task 2: Morphological analysis in context>>>
Although inflection of words in a context-agnostic manner is a useful evaluation of the morphological quality of a system, people do not learn morphology in isolation.
In 2018, the second task of the CoNLL–SIGMORPHON Shared Task BIBREF1 required submitting systems to complete an inflectional cloze task BIBREF3 given only the sentential context and the desired lemma (here, dog): a successful system would predict the plural form “dogs”. Likewise, a Spanish word form ayuda may be a feminine noun or a third-person verb form, which must be disambiguated by context.
This year's task extends the second task from last year. Rather than inflect a single word in context, the task is to provide a complete morphological tagging of a sentence: for each word, a successful system will need to lemmatize it and tag it with a morphosyntactic description (MSD).
Context is critical—depending on the sentence, identical word forms realize a large number of potential inflectional categories, which will in turn influence lemmatization decisions. If the sentence were instead “The barking dogs kept us up all night”, “barking” is now an adjective, and its lemma is also “barking”.
<<</Task 2: Morphological analysis in context>>>
<<</Tasks and Evaluation>>>
<<<Data>>>
<<<Data for Task 1>>>
<<<Language pairs>>>
We presented data in 100 language pairs spanning 79 unique languages. Data for all but four languages (Basque, Kurmanji, Murrinhpatha, and Sorani) are extracted from English Wiktionary, a large multi-lingual crowd-sourced dictionary with morphological paradigms for many lemmata. 20 of the 100 language pairs are either distantly related or unrelated; this allows speculation into the relative importance of data quantity and linguistic relatedness.
<<</Language pairs>>>
<<<Data format>>>
For each language, the basic data consists of triples of the form (lemma, feature bundle, inflected form), as in tab:sub1data. The first feature in the bundle always specifies the core part of speech (e.g., verb). For each language pair, separate files contain the high- and low-resource training examples.
All features in the bundle are coded according to the UniMorph Schema, a cross-linguistically consistent universal morphological feature set BIBREF8, BIBREF9.
<<</Data format>>>
<<<Extraction from Wiktionary>>>
For each of the Wiktionary languages, Wiktionary provides a number of tables, each of which specifies the full inflectional paradigm for a particular lemma. As in the previous iteration, tables were extracted using a template annotation procedure described in BIBREF10.
<<</Extraction from Wiktionary>>>
<<<Sampling data splits>>>
From each language's collection of paradigms, we sampled the training, development, and test sets as in 2018. Crucially, while the data were sampled in the same fashion, the datasets are distinct from those used for the 2018 shared task.
Our first step was to construct probability distributions over the (lemma, feature bundle, inflected form) triples in our full dataset. For each triple, we counted how many tokens the inflected form has in the February 2017 dump of Wikipedia for that language. To distribute the counts of an observed form over all the triples that have this token as its form, we follow the method used in the previous shared task BIBREF1, training a neural network on unambiguous forms to estimate the distribution over all, even ambiguous, forms. We then sampled 12,000 triples without replacement from this distribution. The first 100 were taken as training data for low-resource settings. The first 10,000 were used as high-resource training sets. As these sets are nested, the highest-count triples tend to appear in the smaller training sets.
The final 2000 triples were randomly shuffled and then split in half to obtain development and test sets of 1000 forms each. The final shuffling was performed to ensure that the development set is similar to the test set. By contrast, the development and test sets tend to contain lower-count triples than the training set.
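The sketch below illustrates the nested-split construction under the assumption that a relative weight per triple is already available (in the shared task these weights come from Wikipedia counts redistributed by a neural model, which is not reproduced here); function and variable names are illustrative:

import random

def sample_splits(triples, weights, seed=0):
    """Nested split construction, assuming a relative weight per triple is given.

    triples: list of (lemma, feature bundle, inflected form) items.
    weights: relative frequency estimates for the triples; in the shared task these
             come from Wikipedia token counts redistributed by a neural model,
             which is not reproduced in this sketch.
    """
    rng = random.Random(seed)
    remaining = list(zip(triples, weights))
    sampled = []
    for _ in range(min(12000, len(remaining))):      # weighted sampling w/o replacement
        items, ws = zip(*remaining)
        pick = rng.choices(range(len(items)), weights=ws, k=1)[0]
        sampled.append(items[pick])
        remaining.pop(pick)
    train_low = sampled[:100]        # low-resource training set
    train_high = sampled[:10000]     # high-resource training set (nested, shares the first 100)
    tail = sampled[10000:]
    rng.shuffle(tail)                # final shuffle so the dev set resembles the test set
    dev, test = tail[:1000], tail[1000:]
    return train_low, train_high, dev, test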
<<</Sampling data splits>>>
<<<Other modifications>>>
We further adopted some changes to increase compatibility. Namely, we corrected some annotation errors created while scraping Wiktionary for the 2018 task, and we standardized Romanian t-cedilla and t-comma to t-comma. (The same was done with s-cedilla and s-comma.)
<<</Other modifications>>>
<<</Data for Task 1>>>
<<<Data for Task 2>>>
Our data for task 2 come from the Universal Dependencies treebanks BIBREF11, which provide pre-defined training, development, and test splits and annotations in a unified annotation schema for morphosyntax and dependency relationships. Unlike the 2018 cloze task which used UD data, we require no manual data preparation and are able to leverage all 107 monolingual treebanks. As is typical, data are presented in CoNLL-U format, although we modify the morphological feature and lemma fields.
<<<Data conversion>>>
The morphological annotations for the 2019 shared task were converted to the UniMorph schema BIBREF10 according to BIBREF12, who provide a deterministic mapping that increases agreement across languages. This also moves the part of speech into the bundle of morphological features. We do not attempt to individually correct any errors in the UD source material. Further, some languages received additional pre-processing. In the Finnish data, we removed morpheme boundaries that were present in the lemmata (e.g., puhe#kieli $\mapsto $ puhekieli `spoken+language'). Russian lemmata in the GSD treebank were presented in all uppercase; to match the 2018 shared task, we lowercased these. In development and test data, all fields except for form and index within the sentence were struck.
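A minimal sketch of this kind of language-specific clean-up is shown below; the function name and its arguments are hypothetical, and only the two documented cases are covered:

def preprocess_lemma(lemma, language, treebank):
    """Language-specific lemma clean-up; function name and arguments are hypothetical."""
    if language == "Finnish":
        lemma = lemma.replace("#", "")      # puhe#kieli -> puhekieli
    if language == "Russian" and treebank == "GSD":
        lemma = lemma.lower()               # GSD lemmata were provided in all uppercase
    return lemma

print(preprocess_lemma("puhe#kieli", "Finnish", "TDT"))   # puhekieli
print(preprocess_lemma("ЯБЛОКО", "Russian", "GSD"))       # яблоко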
<<</Data conversion>>>
<<</Data for Task 2>>>
<<</Data>>>
<<<Baselines>>>
<<<Task 1 Baseline>>>
We include four neural sequence-to-sequence models mapping lemmata to inflected word forms: soft attention BIBREF13, non-monotonic hard attention BIBREF14, monotonic hard attention and a variant with offset-based transition distribution BIBREF15. Neural sequence-to-sequence models with soft attention BIBREF13 have dominated previous SIGMORPHON shared tasks BIBREF16. BIBREF14 instead models the alignment between characters in the lemma and the inflected word form explicitly with hard attention and learns this alignment and transduction jointly. BIBREF15 shows that enforcing strict monotonicity with hard attention is beneficial in tasks such as morphological inflection where the transduction is mostly monotonic. The encoder is a biLSTM while the decoder is a left-to-right LSTM. All models use multiplicative attention and have roughly the same number of parameters. In the model, a morphological tag is fed to the decoder along with target character embeddings to guide the decoding. During the training of the hard attention model, dynamic programming is applied to marginalize all latent alignments exactly.
<<</Task 1 Baseline>>>
<<<Task 2 Baselines>>>
<<<Non-neural>>>
BIBREF17: The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. The model is globally normalized with the use of a second-order linear-chain CRF. To efficiently calculate the partition function, the choice of lemmata is pruned with the use of pre-extracted edit trees.
<<</Non-neural>>>
<<<Neural>>>
BIBREF18: This is a state-of-the-art neural model that also performs joint morphological tagging and lemmatization, but also accounts for the exposure bias with the application of maximum likelihood (MLE). The model stitches the tagger and lemmatizer together with the use of jackknifing BIBREF19 to expose the lemmatizer to the errors made by the tagger model during training. The morphological tagger is based on a character-level biLSTM embedder that produces the embedding for a word, and a word-level biLSTM tagger that predicts a morphological tag sequence for each word in the sentence. The lemmatizer is a neural sequence-to-sequence model BIBREF15 that uses the decoded morphological tag sequence from the tagger as an additional attribute. The model uses hard monotonic attention instead of standard soft attention, along with a dynamic programming based training scheme.
<<</Neural>>>
<<</Task 2 Baselines>>>
<<</Baselines>>>
<<<Results>>>
The SIGMORPHON 2019 shared task received 30 submissions—14 for task 1 and 16 for task 2—from 23 teams. In addition, the organizers' baseline systems were evaluated.
<<<Task 1 Results>>>
Five teams participated in the first Task, with a variety of methods aimed at leveraging the cross-lingual data to improve system performance.
The University of Alberta (UAlberta) performed a focused investigation on four language pairs, training cognate-projection systems from external cognate lists. Two methods were considered: one which trained a high-resource neural encoder-decoder, and projected the test data into the HRL, and one that projected the HRL data into the LRL, and trained a combined system. Results demonstrated that certain language pairs may be amenable to such methods.
The Tuebingen University submission (Tuebingen) aligned source and target to learn a set of edit-actions with both linear and neural classifiers that independently learned to predict action sequences for each morphological category. Adding in the cross-lingual data only led to modest gains.
AX-Semantics combined the low- and high-resource data to train an encoder-decoder seq2seq model; optionally also implementing domain adaptation methods to focus later epochs on the target language.
The CMU submission first attends over a decoupled representation of the desired morphological sequence before using the updated decoder state to attend over the character sequence of the lemma. Secondly, in order to reduce the bias of the decoder's language model, they hallucinate two types of data that encourage common affixes and character copying. Simply allowing the model to learn to copy characters for several epochs significantly outperforms the task baseline, while further improvements are obtained through fine-tuning. When making use of an adversarial language discriminator, cross-lingual gains are highly correlated with linguistic similarity, while augmenting the data with hallucinated forms and multiple related target languages further improves the model.
The system from IT-IST also attends separately to tags and lemmas, using a gating mechanism to interpolate the importance of the individual attentions. By combining the gated dual-head attention with a SparseMax activation function, they are able to jointly learn stem and affix modifications, improving significantly over the baseline system.
The relative system performance is described in tab:sub2team, which shows the average per-language accuracy of each system. The table reflects the fact that some teams submitted more than one system (e.g. Tuebingen-1 & Tuebingen-2 in the table).
<<</Task 1 Results>>>
<<<Task 2 Results>>>
Nine teams submitted system papers for Task 2, with several interesting modifications to either the baseline or other prior work that led to modest improvements.
Charles-Saarland achieved the highest overall tagging accuracy by leveraging multi-lingual BERT embeddings fine-tuned on a concatenation of all available languages, effectively transporting the cross-lingual objective of Task 1 into Task 2. Lemmas and tags are decoded separately (with a joint encoder and separate attention); Lemmas are a sequence of edit-actions, while tags are calculated jointly. (There is no splitting of tags into features; tags are atomic.)
CBNU instead lemmatize using a transformer network, while performing tagging with a multilayer perceptron with biaffine attention. Input words are first lemmatized, and then pipelined to the tagger, which produces atomic tag sequences (i.e., no splitting of features).
The team from Istanbul Technical University (ITU) jointly produces lemmatic edit-actions and morphological tags via a two level encoder (first word embeddings, and then context embeddings) and separate decoders. Their system slightly improves over the baseline lemmatization, but significantly improves tagging accuracy.
The team from the University of Groningen (RUG) also uses separate decoders for lemmatization and tagging, but uses ELMo to initialize the contextual embeddings, leading to large gains in performance. Furthermore, joint training on related languages further improves results.
CMU approaches tagging differently than the multi-task decoding we've seen so far (baseline is used for lemmatization). Making use of a hierarchical CRF that first predicts POS (that is subsequently looped back into the encoder), they then seek to predict each feature separately. In particular, predicting POS separately greatly improves results. An attempt to leverage gold typological information led to little gain in the results; experiments suggest that the system is already learning the pertinent information.
The team from Ohio State University (OHIOSTATE) concentrates on predicting tags; the baseline lemmatizer is used for lemmatization. To that end, they make use of a dual decoder that first predicts features given only the word embedding as input; the predictions are fed to a GRU seq2seq, which then predicts the sequence of tags.
The UNT HiLT+Ling team investigates a low-resource setting of the tagging, by using parallel Bible data to learn a translation matrix between English and the target language, learning morphological tags through analogy with English.
The UFAL-Prague team extends their submission from the UD shared task (multi-layer LSTM), replacing the pretrained embeddings with BERT, to great success (first in lemmatization, 2nd in tagging). Although they predict complete tags, they use the individual features to regularize the decoder. Small gains are also obtained from joining multi-lingual corpora and ensembling.
CUNI–Malta performs lemmatization as operations over edit actions with LSTM and ReLU. Tagging is a bidirectional LSTM augmented by the edit actions (i.e., two-stage decoding), predicting features separately.
The Edinburgh system is a character-based LSTM encoder-decoder with attention, implemented in OpenNMT. It can be seen as an extension of the contextual lemmatization system Lematus BIBREF20 to include morphological tagging, or alternatively as an adaptation of the morphological re-inflection system MED BIBREF21 to incorporate context and perform analysis rather than re-inflection. Like these systems it uses a completely generic encoder-decoder architecture with no specific adaptation to the morphological processing task other than the form of the input. In the submitted version of the system, the input is split into short chunks corresponding to the target word plus one word of context on either side, and the system is trained to output the corresponding lemmas and tags for each three-word chunk.
Several teams relied on external resources to improve their lemmatization and feature analysis; in particular, several made use of pre-trained embeddings. CHARLES-SAARLAND-2 and UFALPRAGUE-1 used pretrained contextual embeddings (BERT) provided by Google BIBREF22. CBNU-1 used a mix of pre-trained embeddings from the CoNLL 2017 shared task and fastText. Further, some teams trained their own embeddings to aid performance.
<<</Task 2 Results>>>
<<</Results>>>
<<<Future Directions>>>
In general, the application of typology to natural language processing BIBREF23, BIBREF24 provides an interesting avenue for multilinguality. Further, our shared task was designed to only leverage a single helper language, though many may exist with lexical or morphological overlap with the target language. Techniques like those of BIBREF25 may aid in designing universal inflection architectures. Neither task this year included unannotated monolingual corpora. Using such data is well-motivated from an L1-learning point of view, and may affect the performance of low-resource data settings.
In the case of inflection an interesting future topic could involve departing from orthographic representation and using more IPA-like representations, i.e. transductions over pronunciations. Different languages, in particular those with idiosyncratic orthographies, may offer new challenges in this respect.
Only one team tried to learn inflection in a multilingual setting—i.e. to use all training data to train one model. Such transfer learning is an interesting avenue of future research, but evaluation could be difficult. One evaluation step to disentangle is whether any cross-language transfer is actually being learned, versus whether having more data simply better biases the networks to copy strings.
Creating new data sets that accurately reflect learner exposure (whether L1 or L2) is also an important consideration in the design of future shared tasks. One pertinent facet of this is information about inflectional categories—often the inflectional information is insufficiently prescribed by the lemma, as with the Romanian verbal inflection classes or nominal gender in German.
As we move toward multilingual models for morphology, it becomes important to understand which representations are critical or irrelevant for adapting to new languages; this may be probed in the style of BIBREF27, and it can be used as a first step toward designing systems that avoid catastrophic forgetting as they learn to inflect new languages BIBREF28.
Future directions for Task 2 include exploring cross-lingual analysis—in stride with both Task 1 and BIBREF29—and leveraging these analyses in downstream tasks.
<<</Future Directions>>>
<<<Conclusions>>>
The SIGMORPHON 2019 shared task provided a type-level evaluation on 100 language pairs in 79 languages and a token-level evaluation on 107 treebanks in 66 languages, of systems for inflection and analysis. On task 1 (low-resource inflection with cross-lingual transfer), 14 systems were submitted, while on task 2 (lemmatization and morphological feature analysis), 16 systems were submitted. All used neural network models, completing a trend in past years' shared tasks and other recent work on morphology.
In task 1, gains from cross-lingual training were generally modest, with gains positively correlating with the linguistic similarity of the two languages.
In the second task, several methods were implemented by multiple groups, with the most successful systems implementing variations of multi-headed attention, multi-level encoding, multiple decoders, and ELMo and BERT contextual embeddings.
We have released the training, development, and test sets, and expect these datasets to provide a useful benchmark for future research into learning of inflectional morphology and string-to-string transduction.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nTasks and Evaluation\nTask 1: Cross-lingual transfer for morphological inflection\nExample\nEvaluation\nTask 2: Morphological analysis in context\nData\nData for Task 1\nLanguage pairs\nData format\nExtraction from Wiktionary\nSampling data splits\nOther modifications\nData for Task 2\nData conversion\nBaselines\nTask 1 Baseline\nTask 2 Baselines\nNon-neural\nNeural\nResults\nTask 1 Results\nTask 2 Results\nFuture Directions\nConclusions"
],
"type": "outline"
}
|
1910.00912
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Hierarchical Multi-Task Natural Language Understanding for Cross-domain Conversational AI: HERMIT NLU
<<<Abstract>>>
We present a new neural architecture for wide-coverage Natural Language Understanding in Spoken Dialogue Systems. We develop a hierarchical multi-task architecture, which delivers a multi-layer representation of sentence meaning (i.e., Dialogue Acts and Frame-like structures). The architecture is a hierarchy of self-attention mechanisms and BiLSTM encoders followed by CRF tagging layers. We describe a variety of experiments, showing that our approach obtains promising results on a dataset annotated with Dialogue Acts and Frame Semantics. Moreover, we demonstrate its applicability to a different, publicly available NLU dataset annotated with domain-specific intents and corresponding semantic roles, providing overall performance higher than state-of-the-art tools such as RASA, Dialogflow, LUIS, and Watson. For example, we show an average 4.45% improvement in entity tagging F-score over Rasa, Dialogflow and LUIS.
<<</Abstract>>>
<<<Introduction>>>
Research in Conversational AI (also known as Spoken Dialogue Systems) has applications ranging from home devices to robotics, and has a growing presence in industry. A key problem in real-world Dialogue Systems is Natural Language Understanding (NLU) – the process of extracting structured representations of meaning from user utterances. In fact, the effective extraction of semantics is an essential feature, being the entry point of any Natural Language interaction system. Apart from challenges given by the inherent complexity and ambiguity of human language, other challenges arise whenever the NLU has to operate over multiple domains. In fact, interaction patterns, domain, and language vary depending on the device the user is interacting with. For example, chit-chatting and instruction-giving for executing an action are different processes in terms of language, domain, syntax and interaction schemes involved. And what if the user combines two interaction domains: “play some music, but first what's the weather tomorrow”?
In this work, we present HERMIT, a HiERarchical MultI-Task Natural Language Understanding architecture, designed for effective semantic parsing of domain-independent user utterances, extracting meaning representations in terms of high-level intents and frame-like semantic structures. With respect to previous approaches to NLU for SDS, HERMIT stands out for being a cross-domain, multi-task architecture, capable of recognising multiple intents/frames in an utterance. HERMIT also shows better performance with respect to current state-of-the-art commercial systems. Such a novel combination of requirements is discussed below.
<<<Cross-domain NLU>>>
A cross-domain dialogue agent must be able to handle heterogeneous types of conversation, such as chit-chatting, giving directions, entertaining, and triggering domain/task actions. A domain-independent and rich meaning representation is thus required to properly capture the intent of the user. Meaning is modelled here through three layers of knowledge: dialogue acts, frames, and frame arguments. Frames and arguments can be in turn mapped to domain-dependent intents and slots, or to Frame Semantics' BIBREF0 structures (i.e. semantic frames and frame elements, respectively), which allow handling of heterogeneous domains and language.
<<</Cross-domain NLU>>>
<<<Multi-task NLU>>>
Deriving such a multi-layered meaning representation can be approached through a multi-task learning approach. Multi-task learning has found success in several NLP problems BIBREF1, BIBREF2, especially with the recent rise of Deep Learning. Thanks to the possibility of building complex networks, handling more tasks at once has been proven to be a successful solution, provided that some degree of dependence holds between the tasks. Moreover, multi-task learning allows the use of different datasets to train sub-parts of the network BIBREF3. Following the same trend, HERMIT is a hierarchical multi-task neural architecture which is able to deal with the three tasks of tagging dialogue acts, frame-like structures, and their arguments in parallel. The network, based on self-attention mechanisms, seq2seq bi-directional Long-Short Term Memory (BiLSTM) encoders, and CRF tagging layers, is hierarchical in the sense that information output from earlier layers flows through the network, feeding following layers to solve downstream dependent tasks.
<<</Multi-task NLU>>>
<<<Multi-dialogue act and -intent NLU>>>
Another degree of complexity in NLU is represented by the granularity of knowledge that can be extracted from an utterance. Utterance semantics is often rich and expressive: approximating meaning to a single user intent is often not enough to convey the required information. As opposed to the traditional single-dialogue act and single-intent view in previous work BIBREF4, BIBREF5, BIBREF6, HERMIT operates on a meaning representation that is multi-dialogue act and multi-intent. In fact, it is possible to model an utterance's meaning through multiple dialogue acts and intents at the same time. For example, the user would be able both to request tomorrow's weather and listen to his/her favourite music with just a single utterance.
A further requirement is that for practical application the system should be competitive with the state of the art: we evaluate HERMIT's effectiveness by running several empirical investigations. We perform a robust test on a publicly available NLU-Benchmark (NLU-BM) BIBREF7 containing 25K cross-domain utterances with a conversational agent. The results obtained show a performance higher than well-known off-the-shelf tools (i.e., Rasa, Dialogflow, LUIS, and Watson). The contribution of the different network components is then highlighted through an ablation study. We also test HERMIT on the smaller Robotics-Oriented MUltitask Language UnderStanding (ROMULUS) corpus, annotated with Dialogue Acts and Frame Semantics. HERMIT produces promising results for the application in a real scenario.
<<</Multi-dialogue act and -intent NLU>>>
<<</Introduction>>>
<<<Related Work>>>
Much research on Natural (or Spoken, depending on the input) Language Understanding has been carried out in the area of Spoken Dialogue Systems BIBREF8, where the advent of statistical learning has led to the application of many data-driven approaches BIBREF9. In recent years, the rise of deep learning models has further improved the state-of-the-art. Recurrent Neural Networks (RNNs) have proven to be particularly successful, especially uni- and bi-directional LSTMs and Gated Recurrent Units (GRUs). The use of such deep architectures has also fostered the development of joint classification models of intents and slots. Bi-directional GRUs are applied in BIBREF10, where the hidden state of each time step is used for slot tagging in a seq2seq fashion, while the final state of the GRU is used for intent classification. The application of attention mechanisms in a BiLSTM architecture is investigated in BIBREF5, while the work of BIBREF11 explores the use of memory networks BIBREF12 to exploit encoding of historical user utterances to improve the slot-filling task. Seq2seq with self-attention is applied in BIBREF13, where the classified intent is also used to guide a special gated unit that contributes to the slot classification of each token.
One of the first attempts to jointly detect domains in addition to intent-slot tagging is the work of BIBREF4. An utterance syntax is encoded through a Recursive NN, and it is used to predict the joined domain-intent classes. Syntactic features extracted from the same network are used in the per-word slot classifier. The work of BIBREF6 applies the same idea of BIBREF10, this time using a context-augmented BiLSTM, and performing domain-intent classification as a single joint task. As in BIBREF11, the history of user utterances is also considered in BIBREF14, in combination with a dialogue context encoder. A two-layer hierarchical structure made of a combination of BiLSTM and BiGRU is used for joint classification of domains and intents, together with slot tagging. BIBREF15 apply multi-task learning to the dialogue domain. Dialogue state tracking, dialogue act and intent classification, and slot tagging are jointly learned. Dialogue states and user utterances are encoded to provide hidden representations, which jointly affect all the other tasks.
Many previous systems are trained and compared over the ATIS (Airline Travel Information Systems) dataset BIBREF16, which covers only the flight-booking domain. Some of them also use bigger, not publicly available datasets, which appear to be similar to the NLU-BM in terms of number of intents and slots, but they cover no more than three or four domains. Our work stands out for its more challenging NLU setting, since we are dealing with a higher number of domains/scenarios (18), intents (64) and slots (54) in the NLU-BM dataset, and dialogue acts (11), frames (58) and frame elements (84) in the ROMULUS dataset. Moreover, we propose a multi-task hierarchical architecture, where each layer is trained to solve one of the three tasks. Each of these is tackled with a seq2seq classification using a CRF output layer, as in BIBREF3.
The NLU problem has been studied also on the Interactive Robotics front, mostly to support basic dialogue systems, with few dialogue states and tailored for specific tasks, such as semantic mapping BIBREF17, navigation BIBREF18, BIBREF19, or grounded language learning BIBREF20. However, the designed approaches, either based on formal languages or data-driven, have never been shown to scale to real world scenarios. The work of BIBREF21 makes a step forward in this direction. Their model still deals with the single `pick and place' domain, covering no more than two intents, but it is trained on several thousands of examples, making it able to manage more unstructured language. An attempt to manage a higher number of intents, as well as more variable language, is represented by the work of BIBREF22 where the sole Frame Semantics is applied to represent user intents, with no Dialogue Acts.
<<</Related Work>>>
<<<Jointly parsing dialogue acts and frame-like structures>>>
The identification of Dialogue Acts (henceforth DAs) is required to drive the dialogue manager to the next dialogue state. General frame structures (FRs) provide a reference framework to capture user intents, in terms of required or desired actions that a conversational agent has to perform. Depending on the level of abstraction required by an application, these can be interpreted as more domain-dependent paradigms like intent, or to shallower representations, such as semantic frames, as conceived in FrameNet BIBREF23. From this perspective, semantic frames represent a versatile abstraction that can be mapped over an agent's capabilities, allowing also the system to be easily extended with new functionalities without requiring the definition of new ad-hoc structures. Similarly, frame arguments (ARs) act as slots in a traditional intent-slots scheme, or to frame elements for semantic frames.
In our work, the whole process of extracting a complete semantic interpretation as required by the system is tackled with a multi-task learning approach across DAs, FRs, and ARs. Each of these tasks is modelled as a seq2seq problem, where a task-specific label is assigned to each token of the sentence according to the IOB2 notation BIBREF24, with “B-” marking the Beginning of the chunk, “I-” the tokens Inside the chunk while “O-” is assigned to any token that does not belong to any chunk. Task labels are drawn from the set of classes defined for DAs, FRs, and ARs. Figure TABREF5 shows an example of the tagging layers over the sentence Where can I find Starbucks?, where Frame Semantics has been selected as underlying reference theory.
<<<Architecture description>>>
The central motivation behind the proposed architecture is that there is a dependence among the three tasks of identifying DAs, FRs, and ARs. The relationship between tagging frames and arguments appears more evident, as also developed in theories like Frame Semantics – although it is defined independently by each theory. However, some degree of dependence also holds between the DAs and FRs. For example, the FrameNet semantic frame Desiring, expressing a desire of the user for an event to occur, is more likely to be used in the context of an Inform DA, which indicates the state of notifying the agent with some information, rather than in an Instruction. This is clearly visible in interactions like “I'd like a cup of hot chocolate” or “I'd like to find a shoe shop”, where the user is actually notifying the agent about a desire of hers/his.
In order to reflect such inter-task dependence, the classification process is tackled here through a hierarchical multi-task learning approach. We designed a multi-layer neural network, whose architecture is shown in Figure FIGREF7, where each layer is trained to solve one of the three tasks, namely labelling dialogue acts ($DA$ layer), semantic frames ($FR$ layer), and frame elements ($AR$ layer). The layers are arranged in a hierarchical structure that allows the information produced by earlier layers to be fed to downstream tasks.
The network is mainly composed of three BiLSTM BIBREF25 encoding layers. A sequence of input words is initially converted into an embedded representation through an ELMo embeddings layer BIBREF26, and is fed to the $DA$ layer. The embedded representation is also passed over through shortcut connections BIBREF1, and concatenated with both the outputs of the $DA$ and $FR$ layers. Self-attention layers BIBREF27 are placed after the $DA$ and $FR$ BiLSTM encoders. Where $w_t$ is the input word at time step $t$ of the sentence $\textbf {\textrm {w}} = (w_1, ..., w_T)$, the architecture can be formalised by:
where $\oplus $ represents the vector concatenation operator, $e_t$ is the embedding of the word at time $t$, and $\textbf {\textrm {s}}^{L}$ = ($s_1^L$, ..., $s_T^L$) is the embedded sequence output of each $L$ layer, with $L = \lbrace DA, FR, AR\rbrace $. Given an input sentence, the final sequence of labels $\textbf {y}^L$ for each task is computed through a CRF tagging layer, which operates on the output of the $DA$ and $FR$ self-attention, and of the $AR$ BiLSTM embedding, so that:
where $a^{DA}$, $a^{FR}$ are attended embedded sequences. Due to shortcut connections, layers in the upper levels of the architecture can rely both on direct word embeddings as well as the hidden representation $a_t^L$ computed by a previous layer. Operationally, the latter carries task specific information which, combined with the input embeddings, helps in stabilising the classification of each CRF layer, as shown by our experiments. The network is trained by minimising the sum of the individual negative log-likelihoods of the three CRF layers, while at test time the most likely sequence is obtained through the Viterbi decoding over the output scores of the CRF layer.
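To make the hierarchy concrete, the sketch below gives a simplified Keras approximation of the described architecture (the paper reports a Keras/TensorFlow implementation). It is not the authors' code: frozen 1024-dim ELMo embeddings are replaced by a trainable embedding layer, the CRF taggers by per-token softmax layers (CRF layers are not part of core Keras), and multi-head attention stands in for the self-attention layers; sizes and head counts are illustrative.

import tensorflow as tf
from tensorflow.keras import layers

def build_hermit_sketch(vocab_size, n_da, n_fr, n_ar, emb_dim=128, units=128):
    """Simplified HERMIT-style hierarchy; layer sizes and head counts are illustrative."""
    words = layers.Input(shape=(None,), dtype="int32")
    emb = layers.Embedding(vocab_size, emb_dim)(words)   # stand-in for frozen ELMo embeddings

    # DA level: BiLSTM encoder + self-attention, followed by a tagging head
    s_da = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(emb)
    a_da = layers.MultiHeadAttention(num_heads=4, key_dim=units)(s_da, s_da)
    y_da = layers.TimeDistributed(layers.Dense(n_da, activation="softmax"), name="da")(a_da)

    # FR level: shortcut-connected input (word embeddings + attended DA representation)
    s_fr = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(
        layers.Concatenate()([emb, a_da]))
    a_fr = layers.MultiHeadAttention(num_heads=4, key_dim=units)(s_fr, s_fr)
    y_fr = layers.TimeDistributed(layers.Dense(n_fr, activation="softmax"), name="fr")(a_fr)

    # AR level: shortcut-connected input (word embeddings + DA and FR representations)
    s_ar = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(
        layers.Concatenate()([emb, a_da, a_fr]))
    y_ar = layers.TimeDistributed(layers.Dense(n_ar, activation="softmax"), name="ar")(s_ar)

    model = tf.keras.Model(inputs=words, outputs=[y_da, y_fr, y_ar])
    # joint training: the three per-task losses are summed (softmax heads stand in for CRFs)
    model.compile(optimizer="adam",
                  loss={"da": "sparse_categorical_crossentropy",
                        "fr": "sparse_categorical_crossentropy",
                        "ar": "sparse_categorical_crossentropy"})
    return model

# Label inventory sizes taken from the ROMULUS statistics reported above
model = build_hermit_sketch(vocab_size=10000, n_da=11, n_fr=58, n_ar=84)
model.summary()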
<<</Architecture description>>>
<<</Jointly parsing dialogue acts and frame-like structures>>>
<<<Experimental Evaluation>>>
In order to assess the effectiveness of the proposed architecture and compare against existing off-the-shelf tools, we run several empirical evaluations.
<<<Datasets>>>
We tested the system on two datasets, different in size and complexity of the addressed language.
<<<NLU-Benchmark dataset>>>
The first (publicly available) dataset, NLU-Benchmark (NLU-BM), contains $25,716$ utterances annotated with targeted Scenario, Action, and involved Entities. For example, “schedule a call with Lisa on Monday morning” is labelled to contain a calendar scenario, where the set_event action is instantiated through the entities [event_name: a call with Lisa] and [date: Monday morning]. The Intent is then obtained by concatenating scenario and action labels (e.g., calendar_set_event). This dataset consists of multiple home assistant task domains (e.g., scheduling, playing music), chit-chat, and commands to a robot BIBREF7.
<<</NLU-Benchmark dataset>>>
<<<ROMULUS dataset>>>
The second dataset, ROMULUS, is composed of $1,431$ sentences, for each of which dialogue acts, semantic frames, and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns – e.g., chit-chat, command interpretation. The corpus is composed of different subsections, addressing heterogeneous linguistic phenomena, ranging from imperative instructions (e.g., “enter the bedroom slowly, turn left and turn the lights off ”) to complex requests for information (e.g., “good morning I want to buy a new mobile phone is there any shop nearby?”) or open-domain chit-chat (e.g., “nope thanks let's talk about cinema”). A considerable number of utterances in the dataset is collected through Human-Human Interaction studies in robotic domain ($\approx $$70\%$), though a small portion has been synthetically generated for balancing the frame distribution.
Note that while the NLU-BM is designed to have at most one intent per utterance, sentences are here tagged following the IOB2 sequence labelling scheme (see example of Figure TABREF5), so that multiple dialogue acts, frames, and frame elements can be defined at the same time for the same utterance. For example, three dialogue acts are identified within the sentence [good morning]$_{\textsc {Opening}}$ [I want to buy a new mobile phone]$_{\textsc {Inform}}$ [is there any shop nearby?]$_{\textsc {Req\_info}}$. As a result, though smaller, the ROMULUS dataset provides a richer representation of the sentence's semantics, making the tasks more complex and challenging. These observations are highlighted by the statistics in Table TABREF13, that show an average number of dialogue acts, frames and frame elements always greater than 1 (i.e., $1.33$, $1.41$ and $3.54$, respectively).
<<</ROMULUS dataset>>>
<<</Datasets>>>
<<<Experimental setup>>>
All the models are implemented with Keras BIBREF28 and Tensorflow BIBREF29 as backend, and run on a Titan Xp. Experiments are performed in a 10-fold setting, using one fold for tuning and one for testing. However, since HERMIT is designed to operate on dialogue acts, semantic frames and frame elements, the best hyperparameters are obtained over the ROMULUS dataset via a grid search using early stopping, and are applied also to the NLU-BM models. This guarantees fairness towards other systems, that do not perform any fine-tuning on the training data. We make use of pre-trained 1024-dim ELMo embeddings BIBREF26 as word vector representations without re-training the weights.
<<</Experimental setup>>>
<<<Experiments on the NLU-Benchmark>>>
This section shows the results obtained on the NLU-Benchmark (NLU-BM) dataset provided by BIBREF7, by comparing HERMIT to off-the-shelf NLU services, namely: Rasa, Dialogflow, LUIS and Watson. In order to apply HERMIT to NLU-BM annotations, these have been aligned so that Scenarios are treated as DAs, Actions as FRs and Entities as ARs.
To make our model comparable against other approaches, we reproduced the same folds as in BIBREF7, where a resized version of the original dataset is used. Table TABREF11 shows some statistics of the NLU-BM and its reduced version. Moreover, micro-averaged Precision, Recall and F1 are computed following the original paper to assure consistency. TP, FP and FN of intent labels are obtained as in any other multi-class task. An entity is instead counted as TP if there is an overlap between the predicted and the gold span, and their labels match.
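A minimal sketch of the entity scoring rule described above (our own illustration in Python; the greedy one-to-one matching between predicted and gold entities is an assumption, since the text does not specify how multiple overlaps are resolved):

def overlaps(a, b):
    # a, b are (start, end) offsets, end exclusive
    return a[0] < b[1] and b[0] < a[1]

def entity_micro_prf(predicted, gold):
    """predicted / gold: lists of (start, end, label) triples pooled over the test set."""
    tp, used = 0, set()
    for p_start, p_end, p_label in predicted:
        for i, (g_start, g_end, g_label) in enumerate(gold):
            if i not in used and p_label == g_label and overlaps((p_start, p_end), (g_start, g_end)):
                tp += 1
                used.add(i)
                break
    fp, fn = len(predicted) - tp, len(gold) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1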
Experimental results are reported in Table TABREF21. The statistical significance is evaluated through the Wilcoxon signed-rank test. When looking at the intent F1, HERMIT performs significantly better than Rasa $[Z=-2.701, p = .007]$ and LUIS $[Z=-2.807, p = .005]$. On the contrary, the improvements w.r.t. Dialogflow $[Z=-1.173, p = .241]$ do not seem to be significant. This is probably due to the high variance obtained by Dialogflow across the 10 folds. Watson is by a significant margin the most accurate system in recognising intents $[Z=-2.191, p = .028]$, especially due to its Precision score.
The hierarchical multi-task architecture of HERMIT seems to contribute strongly to entity tagging accuracy. In fact, in this task it performs significantly better than Rasa $[Z=-2.803, p = .005]$, Dialogflow $[Z=-2.803, p = .005]$, LUIS $[Z=-2.803, p = .005]$ and Watson $[Z=-2.805, p = .005]$, with improvements from $7.08$ to $35.92$ of F1.
Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems. The statistical analysis shows a significant improvement over Rasa $[Z=-2.803, p = .005]$, Dialogflow $[Z=-2.803, p = .005]$, LUIS $[Z=-2.803, p = .005]$ and Watson $[Z=-2.803, p = .005]$.
<<<Ablation study>>>
In order to assess the contributions of the HERMIT's components, we performed an ablation study. The results are obtained on the NLU-BM, following the same setup as in Section SECREF16.
Results are shown in Table TABREF25. The first row refers to the complete architecture, while –SA shows the results of HERMIT without the self-attention mechanism. Then, from this latter we further remove shortcut connections (– SA/CN) and CRF taggers (– SA/CRF). The last row (– SA/CN/CRF) shows the results of a simple architecture, without self-attention, shortcuts, and CRF. Though the differences are not statistically significant, the contribution of the individual architectural components can be observed. The contribution of self-attention is distributed across all the tasks, with a small inclination towards the upstream ones. This means that while the entity tagging task is mostly lexicon-independent, it is easier to identify pivoting keywords for predicting the intent, e.g. the verb “schedule” triggering the calendar_set_event intent. The impact of shortcut connections is more evident on entity tagging. In fact, the effect provided by shortcut connections is that the information flowing throughout the hierarchical architecture allows higher layers to encode richer representations (i.e., original word embeddings + latent semantics from the previous task). Conversely, the presence of the CRF tagger affects mainly the lower levels of the hierarchical architecture. This is probably not due to their position in the hierarchy, but to the way the tasks have been designed. In fact, while the span of an entity is expected to cover a few tokens, in intent recognition (i.e., a combination of Scenario and Action recognition) the span always covers all the tokens of an utterance. The CRF therefore preserves the consistency of the IOB2 sequence structure. However, HERMIT seems to be the most stable architecture, both in terms of standard deviation and task performance, with a good balance between intent and entity recognition.
<<</Ablation study>>>
<<</Experiments on the NLU-Benchmark>>>
<<<Experiments on the ROMULUS dataset>>>
In this section we report the experiments performed on the ROMULUS dataset (Table TABREF27). Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noting that the EM Combined score is computed as the conjunction of the three individual predictions – i.e., a match occurs only when all three sequences are correct.
Results in terms of EM reflect the complexity of the different tasks, motivating their position within the hierarchy. Specifically, dialogue act identification is the easiest task ($89.31\%$) with respect to frame ($82.60\%$) and frame element ($79.73\%$), due to the shallow semantics it aims to capture. However, when looking at the span F1, its score ($89.42\%$) is lower than the frame element identification task ($92.26\%$). The reason is that, even though the label set is smaller, dialogue act spans tend to be longer than frame element ones, sometimes covering the whole sentence. Frame elements, instead, are often only one or two tokens long, which inflates span-based metrics. Frame identification is the most complex task for several reasons. First, many frame spans are interlaced or even nested; this contributes to increasing the network entropy. Second, while the dialogue act label is highly related to syntactic structures, frame identification is often subject to the inherent ambiguity of language (e.g., get can evoke both Commerce_buy and Arriving). We also report the metrics in BIBREF7 for consistency. For dialogue act and frame tasks, the scores only indicate the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, which is essential to solve and evaluate our tasks. However, the frame element scores are comparable to the benchmark, since the task is very similar.
Overall, getting back to the combined EM accuracy, HERMIT seems promising, with the network able to reproduce all three gold sequences in almost $70\%$ of the cases. This result gives an idea of the architecture's behaviour over the entire pipeline.
<<</Experiments on the ROMULUS dataset>>>
<<<Discussion>>>
The experimental evaluation reported in this section provides different insights. The proposed architecture addresses the problem of NLU in wide-coverage conversational systems, modelling semantics through multiple Dialogue Acts and Frame-like structures in an end-to-end fashion. In addition, its hierarchical structure, which reflects the complexity of the single tasks, allows providing rich representations across the whole network. In this respect, we can affirm that the architecture successfully tackles the multi-task problem, with results that are promising in terms of usability and applicability of the system in real scenarios.
However, a thorough evaluation in the wild must be carried out, to assess to what extent the system is able to handle complex spoken language phenomena, such as repetitions, disfluencies, etc. To this end, a real scenario evaluation may open new research directions, by addressing new tasks to be included in the multi-task architecture. This is supported by the scalable nature of the proposed approach. Moreover, following BIBREF3, corpora providing different annotations can be exploited within the same multi-task network.
We also empirically showed how the same architectural design could be applied to a dataset addressing similar problems. In fact, a comparison with off-the-shelf tools shows the benefits provided by the hierarchical structure, with overall performance better than any current solution. An ablation study has been performed, assessing the contribution provided by the different components of the network. The results show how the shortcut connections help in the more fine-grained tasks, successfully encoding richer representations. CRFs help when longer spans are being predicted, which are more frequent in the upstream tasks.
Finally, the seq2seq design allowed obtaining a multi-label approach, enabling the identification of multiple spans in the same utterance that might evoke different dialogue acts/frames. This represents a novelty for NLU in conversational systems, as such a problem has always been tackled as a single-intent detection. However, the seq2seq approach carries also some limitations, especially on the Frame Semantics side. In fact, label sequences are linear structures, not suitable for representing nested predicates, a tough and common problem in Natural Language. For example, in the sentence “I want to buy a new mobile phone”, the [to buy a new mobile phone] span represents both the Desired_event frame element of the Desiring frame and a Commerce_buy frame at the same time. At the moment of writing, we are working on modeling nested predicates through the application of bilinear models.
<<</Discussion>>>
<<</Experimental Evaluation>>>
<<<Future Work>>>
We have started integrating a corpus of 5M sentences of real users chit-chatting with our conversational agent, though at the time of writing they represent only $16\%$ of the current dataset.
As already pointed out in Section SECREF28, there are some limitations in the current approach that need to be addressed. First, we have to assess the network's capability in handling typical phenomena of spontaneous spoken language input, such as repetitions and disfluencies BIBREF30. This may open new research directions, by including new tasks to identify/remove any kind of noise from the spoken input. Second, the seq2seq scheme does not deal with nested predicates, a common aspect of Natural Language. To the best of our knowledge, there is no architecture that implements an end-to-end network for FrameNet based semantic parsing. Following previous work BIBREF2, one of our future goals is to tackle such problems through hierarchical multi-task architectures that rely on bilinear models.
<<</Future Work>>>
<<<Conclusion>>>
In this paper we presented HERMIT NLU, a hierarchical multi-task architecture for the semantic parsing of sentences in cross-domain spoken dialogue systems. The problem is addressed using a seq2seq model employing BiLSTM encoders and self-attention mechanisms, followed by CRF tagging layers. We evaluated HERMIT on the 25K-sentence NLU-Benchmark, where it outperforms state-of-the-art NLU tools such as Rasa, Dialogflow, LUIS and Watson, even without specific fine-tuning of the model.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nCross-domain NLU\nMulti-task NLU\nMulti-dialogue act and -intent NLU\nRelated Work\nJointly parsing dialogue acts and frame-like structures\nArchitecture description\nExperimental Evaluation\nDatasets\nNLU-Benchmark dataset\nROMULUS dataset\nExperimental setup\nExperiments on the NLU-Benchmark\nAblation study\nExperiments on the ROMULUS dataset\nDiscussion\nFuture Work\nConclusion"
],
"type": "outline"
}
|
1908.10449
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Interactive Machine Comprehension with Information Seeking Agents
<<<Abstract>>>
Existing machine reading comprehension (MRC) models do not scale effectively to real-world applications like web-level information retrieval and question answering (QA). We argue that this stems from the nature of MRC datasets: most of these are static environments wherein the supporting documents and all necessary information are fully observed. In this paper, we propose a simple method that reframes existing MRC datasets as interactive, partially observable environments. Specifically, we "occlude" the majority of a document's text and add context-sensitive commands that reveal "glimpses" of the hidden text to a model. We repurpose SQuAD and NewsQA as an initial case study, and then show how the interactive corpora can be used to train a model that seeks relevant information through sequential decision making. We believe that this setting can contribute in scaling models to web-level QA scenarios.
<<</Abstract>>>
<<<Introduction>>>
Many machine reading comprehension (MRC) datasets have been released in recent years BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 to benchmark a system's ability to understand and reason over natural language. Typically, these datasets require an MRC model to read through a document to answer a question about information contained therein.
The supporting document is, more often than not, static and fully observable. This raises concerns, since models may find answers simply through shallow pattern matching; e.g., syntactic similarity between the words in questions and documents. As pointed out by BIBREF5, for questions starting with when, models tend to predict the only date/time answer in the supporting document. Such behavior limits the generality and usefulness of MRC models, and suggests that they do not learn a proper `understanding' of the intended task. In this paper, to address this problem, we shift the focus of MRC data away from `spoon-feeding' models with sufficient information in fully observable, static documents. Instead, we propose interactive versions of existing MRC tasks, whereby the information needed to answer a question must be gathered sequentially.
The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL).
As an initial case study, we repurpose two well known, related corpora with different difficulty levels for our interactive MRC task: SQuAD and NewsQA. Table TABREF2 shows some examples of a model performing interactive MRC on these datasets. Naturally, our reframing makes the MRC problem harder; however, we believe the added demands of iMRC more closely match web-level QA and may lead to deeper comprehension of documents' content.
The main contributions of this work are as follows:
We describe a method to make MRC datasets interactive and formulate the new task as an RL problem.
We develop a baseline agent that combines a top performing MRC model and a state-of-the-art RL optimization algorithm and test it on our iMRC tasks.
We conduct experiments on several variants of iMRC and discuss the significant challenges posed by our setting.
<<</Introduction>>>
<<<Related Works>>>
Skip-reading BIBREF6, BIBREF7, BIBREF8 is an existing setting in which MRC models read partial documents. Concretely, these methods assume that not all tokens in the input sequence are useful, and therefore learn to skip irrelevant tokens based on the current input and their internal memory. Since skipping decisions are discrete, the models are often optimized by the REINFORCE algorithm BIBREF9. For example, the structural-jump-LSTM proposed in BIBREF10 learns to skip and jump over chunks of text. In a similar vein, BIBREF11 designed a QA task where the model reads streaming data unidirectionally, without knowing when the question will be provided. Skip-reading approaches are limited in that they only consider jumping over a few consecutive tokens and the skipping operations are usually unidirectional. Based on the assumption that a single pass of reading may not provide sufficient information, multi-pass reading methods have also been studied BIBREF12, BIBREF13.
Compared to skip-reading and multi-turn reading, our work enables an agent to jump through a document in a more dynamic manner, in some sense combining aspects of skip-reading and re-reading. For example, it can jump forward, backward, or to an arbitrary position, depending on the query. This also distinguishes the model we develop in this work from ReasoNet BIBREF13, where an agent decides when to stop unidirectional reading.
Recently, BIBREF14 propose DocQN, which is a DQN-based agent that leverages the (tree) structure of documents and navigates across sentences and paragraphs. The proposed method has been shown to outperform vanilla DQN and IR baselines on the TriviaQA dataset. The main differences between our work and DocQN include: iMRC does not depend on extra meta information of documents (e.g., title, paragraph title) for building document trees as in DocQN; our proposed environment is partially observable, and thus an agent is required to explore and memorize the environment via interaction; the action space in our setting (especially for the Ctrl+F command as defined in a later section) is arguably larger than the tree sampling action space in DocQN.
Closely related to iMRC is work by BIBREF15, in which the authors introduce a collection of synthetic tasks to train and test information-seeking capabilities in neural models. We extend that work by developing a realistic and challenging text-based task.
Broadly speaking, our approach is also linked to the optimal stopping problem in the literature on Markov decision processes (MDPs) BIBREF16, where at each time-step the agent either continues or stops and accumulates reward. Here, we reformulate conventional QA tasks through the lens of optimal stopping, in hopes of improving over the shallow matching behaviors exhibited by many MRC systems.
<<</Related Works>>>
<<<iMRC: Making MRC Interactive>>>
We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. Both original datasets share similar properties. Specifically, every data-point consists of a tuple, $\lbrace p, q, a\rbrace $, where $p$ represents a paragraph, $q$ a question, and $a$ is the answer. The answer is a word span defined by head and tail positions in $p$. NewsQA is more difficult than SQuAD because it has a larger vocabulary, more difficult questions, and longer source documents.
We first split every paragraph $p$ into a list of sentences $\mathcal {S} = \lbrace s_1, s_2, ..., s_n\rbrace $, where $n$ stands for number of sentences in $p$. Given a question $q$, rather than showing the entire paragraph $p$, we only show an agent the first sentence $s_1$ and withhold the rest. The agent must issue commands to reveal the hidden sentences progressively and thereby gather the information needed to answer question $q$.
An agent decides when to stop interacting and output an answer, but the number of interaction steps is limited. Once an agent has exhausted its step budget, it is forced to answer the question.
<<<Interactive MRC as a POMDP>>>
As described in the previous section, we convert MRC tasks into sequential decision-making problems (which we will refer to as games). These can be described naturally within the reinforcement learning (RL) framework. Formally, tasks in iMRC are partially observable Markov decision processes (POMDP) BIBREF17. An iMRC data-point is a discrete-time POMDP defined by $(S, T, A, \Omega , O, R, \gamma )$, where $\gamma \in [0, 1]$ is the discount factor and the other elements are described in detail below.
Environment States ($S$): The environment state at turn $t$ in the game is $s_t \in S$. It contains the complete internal information of the game, much of which is hidden from the agent. When an agent issues an action $a_t$, the environment transitions to state $s_{t+1}$ with probability $T(s_{t+1} | s_t, a_t)$. In this work, transition probabilities are either 0 or 1 (i.e., deterministic environment).
Actions ($A$): At each game turn $t$, the agent issues an action $a_t \in A$. We will elaborate on the action space of iMRC in the action space section.
Observations ($\Omega $): The text information perceived by the agent at a given game turn $t$ is the agent's observation, $o_t \in \Omega $, which depends on the environment state and the previous action with probability $O(o_t|s_t)$. In this work, observation probabilities are either 0 or 1 (i.e., noiseless observation).

Reward Function ($R$): Based on its actions, the agent receives rewards $r_t = R(s_t, a_t)$. Its objective is to maximize the expected discounted sum of rewards $E \left[\sum _t \gamma ^t r_t \right]$.
<<</Interactive MRC as a POMDP>>>
<<<Action Space>>>
To better describe the action space of iMRC, we split an agent's actions into two phases: information gathering and question answering. During the information gathering phase, the agent interacts with the environment to collect knowledge. It answers questions with its accumulated knowledge in the question answering phase.
Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \le k \le n$:
previous: jump to $ \small {\left\lbrace \begin{array}{ll} s_n & \text{if $k = 1$,}\\ s_{k-1} & \text{otherwise;} \end{array}\right.} $
next: jump to $ \small {\left\lbrace \begin{array}{ll} s_1 & \text{if $k = n$,}\\ s_{k+1} & \text{otherwise;} \end{array}\right.} $
Ctrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;
stop: terminate information gathering phase.
Question Answering: We follow the output format of both SQuAD and NewsQA, where an agent is required to point to the head and tail positions of an answer span within $p$. Assume that at step $t$ the agent stops interacting and the observation $o_t$ is $s_k$. The agent points to a head-tail position pair in $s_k$.
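A minimal sketch of the navigation dynamics during the information gathering phase (our own illustration in Python; details such as what happens when a Ctrl+F query matches no sentence are not specified in the text and are assumptions here):

class IMRCNavigator:
    """Partially observable view over the sentences s_1 ... s_n of a paragraph."""

    def __init__(self, sentences):
        self.sentences = sentences
        self.k = 0                                    # index of the currently observed sentence

    def observe(self):
        return self.sentences[self.k]

    def previous(self):
        self.k = (self.k - 1) % len(self.sentences)   # wraps from s_1 back to s_n
        return self.observe()

    def next(self):
        self.k = (self.k + 1) % len(self.sentences)   # wraps from s_n back to s_1
        return self.observe()

    def ctrl_f(self, query):
        # Jump to the sentence containing the next occurrence of `query`,
        # scanning forward with wrap-around; stay in place if there is no match.
        n = len(self.sentences)
        for offset in range(1, n + 1):
            idx = (self.k + offset) % n
            if query in self.sentences[idx]:
                self.k = idx
                break
        return self.observe()

A stop action would simply end this interaction loop and hand the current observation (or memory) to the question answerer.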
<<</Action Space>>>
<<<Query Types>>>
Given the question “When is the deadline of AAAI?”, as a human, one might try searching “AAAI” on a search engine, follow the link to the official AAAI website, then search for keywords “deadline” or “due date” on the website to jump to a specific paragraph. Humans have a deep understanding of questions because of their significant background knowledge. As a result, the keywords they use to search are not limited to what appears in the question.
Inspired by this observation, we study 3 query types for the Ctrl+F $<$query$>$ command.
One token from the question: the setting with the smallest action space. Because iMRC deals with Ctrl+F commands by exact string matching, there is no guarantee that all sentences are accessible from question tokens only.
One token from the union of the question and the current observation: an intermediate level where the action space is larger.
One token from the dataset vocabulary: the action space is huge (see Table TABREF16 for statistics of SQuAD and NewsQA). It is guaranteed that all sentences in all documents are accessible through these tokens.
<<</Query Types>>>
<<<Evaluation Metric>>>
Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use the $\text{F}_1$ score to compare predicted answers against the ground truth, as in previous works. When multiple ground-truth answers exist, we report the max $\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks and report the agent's test performance corresponding to its best validation performance.
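A minimal sketch of the answer-level $\text{F}_1$ in the usual SQuAD style (our own illustration in Python; tokenisation and text normalisation details are assumptions):

from collections import Counter

def token_f1(prediction, ground_truth):
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def max_f1(prediction, ground_truths):
    # when multiple ground-truth answers exist, report the max F1
    return max(token_f1(prediction, gt) for gt in ground_truths)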
<<</Evaluation Metric>>>
<<</iMRC: Making MRC Interactive>>>
<<<Baseline Agent>>>
As a baseline, we propose QA-DQN, an agent that adopts components from QANet BIBREF18 and adds an extra command generation module inspired by LSTM-DQN BIBREF19.
As illustrated in Figure FIGREF6, the agent consists of three components: an encoder, an action generator, and a question answerer. More precisely, at a game step $t$, the encoder reads observation string $o_t$ and question string $q$ to generate attention aggregated hidden representations $M_t$. Using $M_t$, the action generator outputs commands (defined in previous sections) to interact with iMRC. If the generated command is stop or the agent is forced to stop, the question answerer takes the current information at game step $t$ to generate head and tail pointers for answering the question; otherwise, the information gathering procedure continues.
In this section, we describe the high-level model structure and training strategies of QA-DQN. We refer readers to BIBREF18 for detailed information. We will release datasets and code in the near future.
<<<Model Structure>>>
In this section, we use game step $t$ to denote one round of interaction between an agent with the iMRC environment. We use $o_t$ to denote text observation at game step $t$ and $q$ to denote question text. We use $L$ to refer to a linear transformation. $[\cdot ;\cdot ]$ denotes vector concatenation.
<<<Encoder>>>
The encoder consists of an embedding layer, two stacks of transformer blocks (denoted as encoder transformer blocks and aggregation transformer blocks), and an attention layer.
In the embedding layer, we aggregate both word- and character-level embeddings. Word embeddings are initialized by the 300-dimension fastText BIBREF20 vectors trained on Common Crawl (600B tokens), and are fixed during training. Character embeddings are initialized by 200-dimension random vectors. A convolutional layer with 96 kernels of size 5 is used to aggregate the sequence of characters. We use a max pooling layer on the character dimension, then a multi-layer perceptron (MLP) of size 96 is used to aggregate the concatenation of word- and character-level representations. A highway network BIBREF21 is used on top of this MLP. The resulting vectors are used as input to the encoding transformer blocks.
Each encoding transformer block consists of four convolutional layers (with shared weights), a self-attention layer, and an MLP. Each convolutional layer has 96 filters, each kernel's size is 7. In the self-attention layer, we use a block hidden size of 96 and a single head attention mechanism. Layer normalization and dropout are applied after each component inside the block. We add positional encoding into each block's input. We use one layer of such an encoding block.
At a game step $t$, the encoder processes text observation $o_t$ and question $q$ to generate context-aware encodings $h_{o_t} \in \mathbb {R}^{L^{o_t} \times H_1}$ and $h_q \in \mathbb {R}^{L^{q} \times H_1}$, where $L^{o_t}$ and $L^{q}$ denote length of $o_t$ and $q$ respectively, $H_1$ is 96.
Following BIBREF18, we use a context-query attention layer to aggregate the two representations $h_{o_t}$ and $h_q$. Specifically, the attention layer first uses two MLPs to map $h_{o_t}$ and $h_q$ into the same space, with the resulting representations denoted as $h_{o_t}^{\prime } \in \mathbb {R}^{L^{o_t} \times H_2}$ and $h_q^{\prime } \in \mathbb {R}^{L^{q} \times H_2}$, in which, $H_2$ is 96.
Then, a tri-linear similarity function is used to compute the similarities between each pair of $h_{o_t}^{\prime }$ and $h_q^{\prime }$ items:
where $\odot $ indicates element-wise multiplication and $w$ is trainable parameter vector of size 96.
We apply softmax to the resulting similarity matrix $S$ along both dimensions, producing $S^A$ and $S^B$. Information in the two representations are then aggregated as
where $h_{oq}$ is aggregated observation representation.
On top of the attention layer, a stack of aggregation transformer blocks is used to further map the observation representations to action representations and answer representations. The configuration parameters are the same as the encoder transformer blocks, except there are two convolution layers (with shared weights), and the number of blocks is 7.
Let $M_t \in \mathbb {R}^{L^{o_t} \times H_3}$ denote the output of the stack of aggregation transformer blocks, in which $H_3$ is 96.
<<</Encoder>>>
<<<Action Generator>>>
The action generator takes $M_t$ as input and estimates Q-values for all possible actions. As described in previous section, when an action is a Ctrl+F command, it is composed of two tokens (the token “Ctrl+F” and the query token). Therefore, the action generator consists of three MLPs:
Here, $L_{shared} \in \mathbb {R}^{95 \times 150}$; $L_{action}$ has an output size of 4 or 2 depending on the number of actions available; the output size of $L_{ctrlf}$ equals the dataset's vocabulary size (depending on different query type settings, we mask out words in the vocabulary that are not query candidates). The overall Q-value is simply the sum of the two components:
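Reading the paragraph above, the missing display presumably states that the Q-value of a Ctrl+F $<$query$>$ command is $Q_{action}(\text{Ctrl+F}) + Q_{ctrlf}(\text{query})$. A minimal PyTorch-style sketch of such a two-head action generator (our own illustration; the pooling of $M_t$ into a single vector, the nonlinearity, and the sizes passed here are assumptions):

import torch
import torch.nn as nn

class ActionGenerator(nn.Module):
    def __init__(self, input_size, shared_size=150, num_actions=4, vocab_size=100000):
        super().__init__()
        self.shared = nn.Linear(input_size, shared_size)         # plays the role of L_shared
        self.action_head = nn.Linear(shared_size, num_actions)   # L_action: previous/next/Ctrl+F/stop
        self.ctrlf_head = nn.Linear(shared_size, vocab_size)     # L_ctrlf: one Q-value per query token

    def forward(self, m_t, query_mask=None):
        # m_t: (batch, input_size), e.g. M_t pooled over the token dimension (assumption)
        shared = torch.relu(self.shared(m_t))
        q_action = self.action_head(shared)
        q_ctrlf = self.ctrlf_head(shared)
        if query_mask is not None:                # bool tensor marking valid query candidates
            q_ctrlf = q_ctrlf.masked_fill(~query_mask, float("-inf"))
        return q_action, q_ctrlf

The overall Q-value of issuing Ctrl+F with token $w$ is then q_action[..., ctrlf_index] + q_ctrlf[..., w].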
<<</Action Generator>>>
<<<Question Answerer>>>
Following BIBREF18, we append two extra stacks of aggregation transformer blocks on top of the encoder to compute head and tail positions:
Here, $M_{head}$ and $M_{tail}$ are outputs of the two extra transformer stacks, $L_0$, $L_1$, $L_2$ and $L_3$ are trainable parameters with output size 150, 150, 1 and 1, respectively.
<<</Question Answerer>>>
<<</Model Structure>>>
<<<Memory and Reward Shaping>>>
<<<Memory>>>
In iMRC, some questions may not be easily answerable based only on observation of a single sentence. To overcome this limitation, we provide an explicit memory mechanism to QA-DQN. Specifically, we use a queue to store strings that have been observed recently. The queue has a limited number of slots (we use queues of size [1, 3, 5] in this work). This prevents the agent from simply issuing next commands until the entire environment has been observed and stored, in which case our task would degenerate to the standard MRC setting. The memory slots are reset episodically.
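A minimal sketch of such an explicit memory (our own illustration in Python; how the stored strings are combined before being fed back to the encoder is an assumption):

from collections import deque

memory = deque(maxlen=3)            # 1, 3 or 5 slots in our experiments

def remember(observation):
    memory.append(observation)       # the oldest observation is dropped automatically

def memory_text():
    return " ".join(memory)          # recent observations, concatenated for the encoder

# memory.clear()                     # called at the start of every episode (episodic reset)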
<<</Memory>>>
<<<Reward Shaping>>>
Because the question answerer in QA-DQN is a pointing model, its performance relies heavily on whether the agent can find and stop at the sentence that contains the answer. We design a heuristic reward to encourage and guide this behavior. In particular, we assign a reward if the agent halts at game step $k$ and the answer is a sub-string of $o_k$ (if larger memory slots are used, we assign this reward if the answer is a sub-string of the memory at game step $k$). We denote this reward as the sufficient information reward, since, if an agent sees the answer, it should have a good chance of having gathered sufficient information for the question (although this is not guaranteed).
Note that this sufficient information reward is part of the design of QA-DQN, whereas the question answering score is the only metric used to evaluate an agent's performance on the iMRC task.
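A minimal sketch of this heuristic (our own illustration in Python; the reward magnitude is an assumption, as the text only states that a reward is assigned):

def sufficient_info_reward(ground_truth_answers, observed_text, reward=1.0):
    """Reward given when the agent stops and any gold answer string is a
    sub-string of what it has observed (the current observation, or the whole
    memory when larger memory slots are used)."""
    return reward if any(ans in observed_text for ans in ground_truth_answers) else 0.0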
<<</Reward Shaping>>>
<<<Ctrl+F Only Mode>>>
As mentioned above, an agent might bypass Ctrl+F actions and explore an iMRC game only via next commands. We study this possibility in an ablation study, where we limit the agent to the Ctrl+F and stop commands. In this setting, an agent is forced to explore by means of search queries.
<<</Ctrl+F Only Mode>>>
<<</Memory and Reward Shaping>>>
<<<Training Strategy>>>
In this section, we describe our training strategy. We split the training pipeline into two parts for easy comprehension. We use Adam BIBREF22 as the step rule for optimization in both parts, with the learning rate set to 0.00025.
<<<Action Generation>>>
iMRC games are interactive environments. We use an RL training algorithm to train the interactive information-gathering behavior of QA-DQN. We adopt the Rainbow algorithm proposed by BIBREF23, which integrates several extensions to the original Deep Q-Learning algorithm BIBREF24. Rainbow exhibits state-of-the-art performance on several RL benchmark tasks (e.g., Atari games).
During game playing, we use a mini-batch of size 10 and push all transitions (observation string, question string, generated command, reward) into a replay buffer of size 500,000. We do not compute losses directly using these transitions. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer, compute loss, and update the network.
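A minimal sketch of this replay/update cadence (our own illustration in Python; `agent.update` stands in for the Rainbow learning step and is hypothetical):

import random
from collections import deque

replay_buffer = deque(maxlen=500_000)

def on_game_step(step, transition, agent, batch_size=64, update_every=5):
    # transition = (observation string, question string, generated command, reward)
    replay_buffer.append(transition)
    if step % update_every == 0 and len(replay_buffer) >= batch_size:
        batch = random.sample(replay_buffer, batch_size)
        agent.update(batch)   # hypothetical: compute the Rainbow loss and take a gradient step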
Detailed hyper-parameter settings for action generation are shown in Table TABREF38.
<<</Action Generation>>>
<<<Question Answering>>>
Similarly, we use another replay buffer to store question answering transitions (observation string when interaction stops, question string, ground-truth answer).
Because both iSQuAD and iNewsQA are converted from datasets that provide ground-truth answer positions, we can leverage this information and train the question answerer with supervised learning. Specifically, we only push question answering transitions when the ground-truth answer is in the observation string. For each transition, we convert the ground-truth answer head- and tail-positions from the SQuAD and NewsQA datasets to positions in the current observation string. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer and train the question answerer using the Negative Log-Likelihood (NLL) loss. We use a dropout rate of 0.1.
<<</Question Answering>>>
<<</Training Strategy>>>
<<</Baseline Agent>>>
<<<Experimental Results>>>
In this study, we focus on three factors and their effects on iMRC and the performance of the QA-DQN agent:
different Ctrl+F strategies, as described in the action space section;
enabled vs. disabled next and previous actions;
different memory slot sizes.
Below we report the baseline agent's training performance followed by its generalization performance on test data.
<<<Mastering Training Games>>>
It remains difficult for RL agents to master multiple games at the same time. In our case, each document-question pair can be considered a unique game, and there are hundreds of thousands of them. Therefore, as is common practice in the RL literature, we study an agent's training curves.
Due to space limitations, we select several representative settings to discuss in this section and provide QA-DQN's training and evaluation curves for all experimental settings in the Appendix. We provide the agent's sufficient information rewards (i.e., whether the agent stopped at a state where the observation contains the answer) during training in the Appendix as well.
Figure FIGREF36 shows QA-DQN's training performance ($\text{F}_1$ score) when next and previous actions are available. Figure FIGREF40 shows QA-DQN's training performance ($\text{F}_1$ score) when next and previous actions are disabled. Note that all training curves are averaged over 3 runs with different random seeds and all evaluation curves show the one run with max validation performance among the three.
From Figure FIGREF36, we can see that the three Ctrl+F strategies show similar difficulty levels when next and previous are available, although QA-DQN works slightly better when selecting a word from the question as query (especially on iNewsQA). However, from Figure FIGREF40 we observe that when next and previous are disabled, QA-DQN shows a significant advantage when selecting a word from the question as query. This may be due to the fact that when an agent must use Ctrl+F to navigate within documents, the set of question words is a much smaller action space than in the other two settings. In the 4-action setting, an agent can rely on issuing next and previous actions to reach any sentence in a document.
The effect of action space size on model performance is particularly clear when using a datasets' entire vocabulary as query candidates in the 2-action setting. From Figure FIGREF40 (and figures with sufficient information rewards in the Appendix) we see QA-DQN has a hard time learning in this setting. As shown in Table TABREF16, both datasets have a vocabulary size of more than 100k. This is much larger than in the other two settings, where on average the length of questions is around 10. This suggests that the methods with better sample efficiency are needed to act in more realistic problem settings with huge action spaces.
Experiments also show that a larger memory slot size always helps. Intuitively, with a memory mechanism (either implicit or explicit), an agent could make the environment closer to fully observed by exploring and memorizing observations. Presumably, a larger memory may further improve QA-DQN's performance, but considering the average number of sentences in each iSQuAD game is 5, a memory with more than 5 slots will defeat the purpose of our study of partially observable text environments.
Not surprisingly, QA-DQN performs worse in general on iNewsQA, in all experiments. As shown in Table TABREF16, the average number of sentences per document in iNewsQA is about 6 times more than in iSQuAD. This is analogous to games with larger maps in the RL literature, where the environment is partially observable. A better exploration (in our case, jumping) strategy may help QA-DQN to master such harder games.
<<</Mastering Training Games>>>
<<<Generalizing to Test Set>>>
To study QA-DQN's ability to generalize, we select the best performing agent in each experimental setting on the validation set and report their performance on the test set. The agent's test performance is reported in Table TABREF41. In addition, to support our claim that the challenging part of iMRC tasks is information seeking rather than answering questions given sufficient information, we also report the $\text{F}_1$ score of an agent when it has reached the piece of text that contains the answer, which we denote as $\text{F}_{1\text{info}}$.
From Table TABREF41 (and the validation curves provided in the Appendix) we can observe that QA-DQN's performance during evaluation matches its training performance in most settings. $\text{F}_{1\text{info}}$ scores are consistently higher than the overall $\text{F}_1$ scores, and they have much less variance across different settings. This supports our hypothesis that information seeking plays an important role in solving iMRC tasks, whereas question answering given the necessary information is relatively straightforward. This also suggests that an interactive agent that can better navigate to important sentences is very likely to achieve better performance on iMRC tasks.
<<</Generalizing to Test Set>>>
<<</Experimental Results>>>
<<<Discussion and Future Work>>>
In this work, we propose and explore the direction of converting MRC datasets into interactive environments. We believe interactive, information-seeking behavior is desirable for neural MRC systems when knowledge sources are partially observable and/or too large to encode in their entirety — for instance, when searching for information on the internet, where knowledge is by design easily accessible to humans through interaction.
Despite being restricted, our proposed task presents major challenges to existing techniques. iMRC lies at the intersection of NLP and RL, which is arguably less studied in existing literature. We hope to encourage researchers from both NLP and RL communities to work toward solving this task.
For our baseline, we adopted an off-the-shelf, top-performing MRC model and RL method. Either component can be replaced straightforwardly with other methods (e.g., to utilize a large-scale pretrained language model).
Our proposed setup and baseline agent presently use only a single word with the query command. However, a host of other options should be considered in future work. For example, multi-word queries with fuzzy matching are more realistic. It would also be interesting for an agent to generate a vector representation of the query in some latent space. This vector could then be compared with precomputed document representations (e.g., in an open domain QA dataset) to determine what text to observe next, with such behavior tantamount to learning to do IR.
As mentioned, our idea for reformulating existing MRC datasets as partially observable and interactive environments is straightforward and general. Almost all MRC datasets can be used to study interactive, information-seeking behavior through similar modifications. We hypothesize that such behavior can, in turn, help in solving real-world MRC problems involving search.
<<</Discussion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Works\niMRC: Making MRC Interactive\nInteractive MRC as a POMDP\nAction Space\nQuery Types\nEvaluation Metric\nBaseline Agent\nModel Structure\nEncoder\nAction Generator\nQuestion Answerer\nMemory and Reward Shaping\nMemory\nReward Shaping\nCtrl+F Only Mode\nTraining Strategy\nAction Generation\nQuestion Answering\nExperimental Results\nMastering Training Games\nGeneralizing to Test Set\nDiscussion and Future Work"
],
"type": "outline"
}
|
1910.03814
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Exploring Hate Speech Detection in Multimodal Publications
<<<Abstract>>>
In this work we target the problem of hate speech detection in multimodal publications formed by a text and an image. We gather and annotate a large scale dataset from Twitter, MMHS150K, and propose different models that jointly analyze textual and visual information for hate speech detection, comparing them with unimodal detection. We provide quantitative and qualitative results and analyze the challenges of the proposed task. We find that, even though images are useful for the hate speech detection task, current multimodal models cannot outperform models analyzing only text. We discuss why and open the field and the dataset for further research.
<<</Abstract>>>
<<<Introduction>>>
Social Media platforms such as Facebook, Twitter or Reddit have empowered individuals' voices and facilitated freedom of expression. However, they have also been a breeding ground for hate speech and other types of online harassment. Hate speech is defined in legal literature as speech (or any form of expression) that expresses (or seeks to promote, or has the capacity to increase) hatred against a person or a group of people because of a characteristic they share, or a group to which they belong BIBREF0. Twitter develops this definition in its hateful conduct policy as promoting violence against, directly attacking, or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
In this work we focus on hate speech detection. Due to the inherent complexity of this task, it is important to distinguish hate speech from other types of online harassment. In particular, although it might be offensive to many people, the sole presence of insulting terms does not itself signify or convey hate speech. And, the other way around, hate speech may denigrate or threaten an individual or a group of people without the use of any profanities. People from the african-american community, for example, often use the term nigga online, in everyday language, without malicious intentions to refer to folks within their community, and the word cunt is often used in non hate speech publications and without any sexist purpose. The goal of this work is not to discuss if racial slur, such as nigga, should be pursued. The goal is to distinguish between publications using offensive terms and publications attacking communities, which we call hate speech.
Modern social media content usually include images and text. Some of these multimodal publications are only hate speech because of the combination of the text with a certain image. That is because, as we have stated, the presence of offensive terms does not itself signify hate speech, and the presence of hate speech is often determined by the context of a publication. Moreover, users authoring hate speech tend to intentionally construct publications where the text is not enough to determine they are hate speech. This happens especially in Twitter, where multimodal tweets are formed by an image and a short text, which in many cases is not enough to judge them. In those cases, the image might give extra context to make a proper judgement. Fig. FIGREF5 shows some of such examples in MMHS150K.
The contributions of this work are as follows:
[noitemsep,leftmargin=*]
We propose the novel task of hate speech detection in multimodal publications, collect, annotate and publish a large scale dataset.
We evaluate state of the art multimodal models on this specific task and compare their performance with unimodal detection. Even though images are proved to be useful for hate speech detection, the proposed multimodal models do not outperform unimodal textual models.
We study the challenges of the proposed task, and open the field for future research.
<<</Introduction>>>
<<<Related Work>>>
<<<Hate Speech Detection>>>
The literature on detecting hate speech on online textual publications is extensive. Schmidt and Wiegand BIBREF1 recently provided a good survey of it, where they review the terminology used over time, the features used, the existing datasets and the different approaches. However, the field lacks a consistent dataset and evaluation protocol to compare proposed methods. Saleem et al. BIBREF2 compare different classification methods detecting hate speech in Reddit and other forums. Wassem and Hovy BIBREF3 worked on hate speech detection on twitter, published a manually annotated dataset and studied its hate distribution. Later Wassem BIBREF4 extended the previous published dataset and compared amateur and expert annotations, concluding that amateur annotators are more likely than expert annotators to label items as hate speech. Park and Fung BIBREF5 worked on Wassem datasets and proposed a classification method using a CNN over Word2Vec BIBREF6 word embeddings, showing also classification results on racism and sexism hate sub-classes. Davidson et al. BIBREF7 also worked on hate speech detection on twitter, publishing another manually annotated dataset. They test different classifiers such as SVMs and decision trees and provide a performance comparison. Malmasi and Zampieri BIBREF8 worked on Davidson's dataset improving his results using more elaborated features. ElSherief et al. BIBREF9 studied hate speech on twitter and selected the most frequent terms in hate tweets based on Hatebase, a hate expression repository. They propose a big hate dataset but it lacks manual annotations, and all the tweets containing certain hate expressions are considered hate speech. Zhang et al. BIBREF10 recently proposed a more sophisticated approach for hate speech detection, using a CNN and a GRU BIBREF11 over Word2Vec BIBREF6 word embeddings. They show experiments in different datasets outperforming previous methods. Next, we summarize existing hate speech datasets:
[noitemsep,leftmargin=*]
RM BIBREF10: Formed by $2,435$ tweets discussing Refugees and Muslims, annotated as hate or non-hate.
DT BIBREF7: Formed by $24,783$ tweets annotated as hate, offensive language or neither. In our work, offensive language tweets are considered as non-hate.
WZ-LS BIBREF5: A combination of Wassem datasets BIBREF4, BIBREF3 labeled as racism, sexism, neither or both that make a total of $18,624$ tweets.
Semi-Supervised BIBREF9: Contains $27,330$ general hate speech Twitter tweets crawled in a semi-supervised manner.
Although often modern social media publications include images, not too many contributions exist that exploit visual information. Zhong et al. BIBREF12 worked on classifying Instagram images as potential cyberbullying targets, exploiting both the image content, the image caption and the comments. However, their visual information processing is limited to the use of features extracted by a pre-trained CNN, the use of which does not achieve any improvement. Hosseinmardi et al. BIBREF13 also address the problem of detecting cyberbullying incidents on Instagram exploiting both textual and image content. But, again, their visual information processing is limited to use the features of a pre-trained CNN, and the improvement when using visual features on cyberbullying classification is only of 0.01%.
<<</Hate Speech Detection>>>
<<<Visual and Textual Data Fusion>>>
A typical task in multimodal visual and textual analysis is to learn an alignment between feature spaces. To do that, usually a CNN and a RNN are trained jointly to learn a joint embedding space from aligned multimodal data. This approach is applied in tasks such as image captioning BIBREF14, BIBREF15 and multimodal image retrieval BIBREF16, BIBREF17. On the other hand, instead of explicitly learning an alignment between two spaces, the goal of Visual Question Answering (VQA) is to merge both data modalities in order to decide which answer is correct. This problem requires modeling very precise correlations between the image and the question representations. The VQA task requirements are similar to our hate speech detection problem in multimodal publications, where we have a visual and a textual input and we need to combine both sources of information to understand the global context and make a decision. We thus take inspiration from the VQA literature for the tested models. Early VQA methods BIBREF18 fuse textual and visual information by feature concatenation. Later methods, such as Multimodal Compact Bilinear pooling BIBREF19, utilize bilinear pooling to learn multimodal features. An important limitation of these methods is that the multimodal features are fused in the latter model stage, so the textual and visual relationships are modeled only in the last layers. Another limitation is that the visual features are obtained by representing the output of the CNN as a one dimensional vector, which losses the spatial information of the input images. In a recent work, Gao et al. BIBREF20 propose a feature fusion scheme to overcome these limitations. They learn convolution kernels from the textual information –which they call question-guided kernels– and convolve them with the visual information in an earlier stage to get the multimodal features. Margffoy-Tuay et al. BIBREF21 use a similar approach to combine visual and textual information, but they address a different task: instance segmentation guided by natural language queries. We inspire in these latest feature fusion works to build the models for hate speech detection.
<<</Visual and Textual Data Fusion>>>
<<</Related Work>>>
<<<The MMHS150K dataset>>>
Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exists. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, an important amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps.
<<<Tweets Gathering>>>
We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9. We filtered out retweets, tweets containing less than three words and tweets containing porn related terms. From that selection, we kept the ones that included images and downloaded them. Twitter applies hate speech filters and other kinds of content control based on its policy, although the supervision is based on users' reports. Therefore, as we are gathering tweets from real-time posting, the content we get has not yet passed any filter.
<<</Tweets Gathering>>>
<<<Textual Image Filtering>>>
We aim to create a multimodal hate speech database where all the instances contain visual and textual information that we can later process to determine if a tweet is hate speech or not. But a considerable amount of the images of the selected tweets contain only textual information, such as screenshots of other tweets. To ensure that all the dataset instances contain both visual and textual information, we remove those tweets. To do that, we use TextFCN BIBREF22, BIBREF23 , a Fully Convolutional Network that produces a pixel wise text probability map of an image. We set empirical thresholds to discard images that have a substantial total text probability, filtering out $23\%$ of the collected tweets.
<<</Textual Image Filtering>>>
<<<Annotation>>>
We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk. There, we give the workers the definition of hate speech and show some examples to make the task clearer. We then show the tweet text and image and we ask them to classify it in one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities. Each one of the $150,000$ tweets is labeled by 3 different workers to palliate discrepancies among workers.
We received a lot of valuable feedback from the annotators. Most of them had understood the task correctly, but they were worried because of its subjectivity. This is indeed a subjective task, highly dependent on the annotator convictions and sensitivity. However, we expect to get cleaner annotations the more strong the attack is, which are the publications we are more interested on detecting. We also detected that several users annotate tweets for hate speech just by spotting slur. As already said previously, just the use of particular words can be offensive to many people, but this is not the task we aim to solve. We have not included in our experiments those hits that were made in less than 3 seconds, understanding that it takes more time to grasp the multimodal context and make a decision.
We do a majority voting between the three annotations to get the tweets category. At the end, we obtain $112,845$ not hate tweets and $36,978$ hate tweets. The latest are divided in $11,925$ racist, $3,495$ sexist, $3,870$ homophobic, 163 religion-based hate and $5,811$ other hate tweets (Fig. FIGREF17). In this work, we do not use hate sub-categories, and stick to the hate / not hate split. We separate balanced validation ($5,000$) and test ($10,000$) sets. The remaining tweets are used for training.
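A minimal sketch of the majority-vote aggregation (our own illustration in Python; the handling of three-way disagreement is an assumption, since with six categories the three workers may all disagree):

from collections import Counter

def majority_label(annotations):
    """annotations: the 3 category labels given by the workers for one tweet."""
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= 2 else None   # None = no majority; needs a fall-back rule

# Example: majority_label(["racist", "racist", "no_attack"]) -> "racist"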
We also experimented using hate scores for each tweet computed given the different votes by the three annotators instead of binary labels. The results did not present significant differences to those shown in the experimental part of this work, but the raw annotations will be published nonetheless for further research.
As far as we know, this dataset is the biggest hate speech dataset to date, and the first multimodal hate speech dataset. One of its challenges is to distinguish between tweets that use the same key offensive words but do or do not constitute an attack on a community (hate speech). Fig. FIGREF18 shows the percentage of hate and not-hate tweets for the top keywords.
<<</Annotation>>>
<<</The MMHS150K dataset>>>
<<<Methodology>>>
<<<Unimodal Treatment>>>
<<<Images.>>>
All images are resized such that their shortest side is 500 pixels. During training, online data augmentation is applied as random cropping of $299\times 299$ patches and mirroring. As the image feature extractor we use a CNN, namely a Google Inception v3 architecture BIBREF25 pre-trained on ImageNet BIBREF24. The fine-tuning process of the Inception v3 layers aims to modify its weights so that it extracts the features that, combined with the textual information, are optimal for hate speech detection.
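A shape-level sketch of the visual encoder, using the torchvision Inception v3 as a stand-in for the Google implementation and stripping the classifier so the network returns the 2048-d pooled features:

```python
import torch
import torch.nn as nn
from torchvision import models

# The classification head is replaced by an identity so that the network returns
# the 2048-d vector after the last average pooling, ready to be fused with text.
cnn = models.inception_v3(pretrained=True)
cnn.fc = nn.Identity()
cnn.eval()

with torch.no_grad():
    crops = torch.randn(8, 3, 299, 299)     # a dummy batch of 299x299 crops
    visual_features = cnn(crops)            # shape: (8, 2048)
```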
<<</Images.>>>
<<<Tweet Text.>>>
We train a single-layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations. Since our dataset is not big enough to train a GloVe word embedding model, we use a pre-trained model that has been trained on two billion tweets. This ensures that the model will be able to produce word embeddings for slang and other words typically used on Twitter. To process the tweet text before generating the word embeddings, we use the same pipeline as the model authors, which includes generating symbols to encode Twitter special interactions such as user mentions (@user) or hashtags (#hashtag). To encode the tweet text and input it later to multimodal models, we use the LSTM hidden state after processing the last tweet word. Since the LSTM has been trained for hate speech classification, it extracts the most useful information for this task from the text, which is encoded in the hidden state after inputting the last tweet word.
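A minimal sketch of this text encoder in PyTorch; the GloVe initialisation is indicated only as a comment and a randomly initialised embedding table is used as a stand-in:

```python
import torch
import torch.nn as nn

class TweetTextEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=150):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # self.embedding.weight.data.copy_(glove_vectors)  # GloVe (Twitter) init
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=1, batch_first=True)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)      # (batch, seq_len, 100)
        _, (h_n, _) = self.lstm(embedded)         # h_n: (1, batch, 150)
        return h_n.squeeze(0)                     # hidden state after the last word

encoder = TweetTextEncoder(vocab_size=50_000)
text_features = encoder(torch.randint(0, 50_000, (8, 30)))   # (8, 150)
```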
<<</Tweet Text.>>>
<<<Image Text.>>>
The text in the image can also contain important information to decide whether a publication is hate speech or not, so we extract it and also input it to our model. To do so, we use the Google Vision API Text Detection module BIBREF27. We input the tweet text and the text from the image separately to the multimodal models, so that they can learn different relations between the two texts and between each text and the image. For instance, the model could learn to relate the image text with the area in the image where the text appears, so it could learn to interpret the text in a different way depending on the location where it is written in the image. The image text is also encoded by the LSTM as the hidden state after processing its last word.
<<</Image Text.>>>
<<</Unimodal Treatment>>>
<<<Multimodal Architectures>>>
The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any).
<<<Feature Concatenation Model (FCM)>>>
The image is fed to the Inception v3 architecture and the 2048-dimensional feature vector after the last average pooling layer is used as the visual representation. This vector is then concatenated with the 150-dimensional vectors of the LSTM last-word hidden states of the image text and the tweet text, resulting in a 2348-dimensional feature vector. This vector is then processed by three fully connected layers of decreasing dimensionality $(2348, 1024, 512)$, each followed by batch normalization and ReLU layers, until the dimensions are reduced to two, the number of classes, in the last classification layer. The FCM architecture is illustrated in Fig. FIGREF26.
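A possible reading of the FCM head in PyTorch, assuming batch normalization and ReLU after the first two fully connected layers and a plain linear layer for the final classification:

```python
import torch
import torch.nn as nn

class FCMHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2348, 1024), nn.BatchNorm1d(1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Linear(512, 2),                     # 2-way hate / not hate classifier
        )

    def forward(self, visual, tweet_text, image_text):
        fused = torch.cat([visual, tweet_text, image_text], dim=1)   # (batch, 2348)
        return self.mlp(fused)

head = FCMHead()
logits = head(torch.randn(8, 2048), torch.randn(8, 150), torch.randn(8, 150))  # (8, 2)
```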
<<</Feature Concatenation Model (FCM)>>>
<<<Spatial Concatenation Model (SCM)>>>
Instead of using the last feature vector before classification of the Inception v3 as the visual representation, in the SCM we use the $8\times 8\times 2048$ feature map after the last Inception module. We then concatenate the 150-dimensional vectors encoding the tweet text and the tweet image text at each spatial location of that feature map. The resulting multimodal feature map is processed by two Inception-E blocks BIBREF28. After that, dropout and average pooling are applied and, as in the FCM model, three fully connected layers are used to reduce the dimensionality until the classification layer.
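The spatial concatenation itself can be sketched as tiling the text vectors over the 8x8 grid; the Inception-E blocks and the classification head are omitted in this sketch:

```python
import torch

def spatial_concat(feature_map, tweet_text, image_text):
    # feature_map: (batch, 2048, 8, 8); tweet_text, image_text: (batch, 150)
    b, _, h, w = feature_map.shape
    tt = tweet_text.view(b, -1, 1, 1).expand(-1, -1, h, w)
    it = image_text.view(b, -1, 1, 1).expand(-1, -1, h, w)
    return torch.cat([feature_map, tt, it], dim=1)       # (batch, 2348, 8, 8)

fused = spatial_concat(torch.randn(4, 2048, 8, 8),
                       torch.randn(4, 150), torch.randn(4, 150))
```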
<<</Spatial Concatenation Model (SCM)>>>
<<<Textual Kernels Model (TKM)>>>
The TKM design, inspired by BIBREF20 and BIBREF21, aims to capture interactions between the two modalities more expressively than concatenation models. As in the SCM we use the $8\times 8\times 2048$ feature map after the last Inception module as the visual representation. From the 150-dimensional vector encoding the tweet text, we learn $K_t$ text-dependent kernels using independent fully connected layers that are trained together with the rest of the model. The resulting $K_t$ text-dependent kernels will have a dimensionality of $1\times 1\times 2048$. We do the same with the feature vector encoding the image text, learning $K_{it}$ kernels. The textual kernels are convolved with the visual feature map in the channel dimension at each spatial location, resulting in a $8\times 8\times (K_t+K_{it})$ multimodal feature map, and batch normalization is applied. Then, as in the SCM, the 150-dimensional vectors encoding the tweet text and the tweet image text are concatenated at each spatial location. The rest of the architecture is the same as in the SCM: two Inception-E blocks, dropout, average pooling and three fully connected layers until the classification layer. The number of tweet textual kernels $K_t$ and tweet image textual kernels $K_{it}$ is set to $K_t = 10$ and $K_{it} = 5$. The TKM architecture is illustrated in Fig. FIGREF29.
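A sketch of the textual-kernel operation for the tweet-text branch; the image-text kernels follow the same pattern, and the kernel generator shown here (a single linear layer) is an assumption about the "independent fully connected layers":

```python
import torch
import torch.nn as nn

class TextualKernels(nn.Module):
    def __init__(self, text_dim=150, channels=2048, n_kernels=10):
        super().__init__()
        self.n_kernels = n_kernels
        self.channels = channels
        self.kernel_generator = nn.Linear(text_dim, n_kernels * channels)

    def forward(self, feature_map, text_vec):
        # feature_map: (batch, 2048, 8, 8); text_vec: (batch, 150)
        b = text_vec.size(0)
        kernels = self.kernel_generator(text_vec).view(b, self.n_kernels, self.channels)
        # 1x1 "convolution": channel-wise dot product at every spatial location
        return torch.einsum("bkc,bchw->bkhw", kernels, feature_map)   # (batch, K_t, 8, 8)

tkm = TextualKernels()
response_maps = tkm(torch.randn(4, 2048, 8, 8), torch.randn(4, 150))  # (4, 10, 8, 8)
```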
<<</Textual Kernels Model (TKM)>>>
<<<Training>>>
We train the multimodal models with a cross-entropy loss with softmax activations and an Adam optimizer with an initial learning rate of $1e-4$. Our dataset suffers from a high class imbalance, so we weight the samples' contribution to the loss to fully compensate for it. One of the goals of this work is to explore how each of the inputs contributes to the classification and to prove that the proposed model can learn co-occurrences between visual and textual data useful to improve the hate speech classification results on multimodal data. To do that we train different models where all or only some inputs are available. When an input is not available, we set it to zeros, and we do the same when an image has no text.
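A minimal sketch of this training setup; the model, batches and exact class counts are placeholders:

```python
import torch
import torch.nn as nn

# Sketch of the objective: softmax cross-entropy with per-class weights that
# compensate for the hate / not-hate imbalance, optimised with Adam at an
# initial learning rate of 1e-4. `model` stands for any of the architectures
# above; the actual class counts of the training split would be plugged in.
def make_criterion(n_not_hate, n_hate):
    weights = torch.tensor([1.0, n_not_hate / n_hate])   # index 0: not hate, 1: hate
    return nn.CrossEntropyLoss(weight=weights)

# criterion = make_criterion(n_not_hate, n_hate)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = criterion(model(images, tweet_text, image_text), labels)
```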
<<</Training>>>
<<</Multimodal Architectures>>>
<<</Methodology>>>
<<<Results>>>
Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davison method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models.
First, notice that given the subjectivity of the task and the discrepancies between annotators, getting optimal scores in the evaluation metrics is virtually impossible. However, a system with relatively low metric scores can still be very useful for hate speech detection in a real application: it will fire on publications for which most annotators agree they are hate, which are often the stronger attacks. The proposed LSTM to detect hate speech when only text is available achieves results similar to the method presented in BIBREF7, which we trained with MMHS150K and the same splits. However, more than substantially advancing the state of the art on hate speech detection in textual publications, our key purpose in this work is to introduce and work on its detection in multimodal publications. We use the LSTM because it provides a strong representation of the tweet texts.
The FCM trained only with images gets decent results, considering that in many publications the images might not give any useful information for the task. Fig. FIGREF33 shows some representative examples of the top hate and not hate scored images of this model. Many hate tweets are accompanied by demeaning nudity images of a sexist or homophobic nature. Other racist tweets are accompanied by images caricaturing black people. Finally, memes are also typically used in hate speech publications. The top scored images for not hate are portraits of people belonging to minorities. This is due to the use of slurs inside these communities without an offensive intention, such as the word nigga inside the African-American community or the word dyke inside the lesbian community. These results show that images can be effectively used to discriminate between offensive and non-offensive uses of those words.
Although the model trained only with images proves that they are useful for hate speech detection, the proposed multimodal models are not able to improve the detection compared to the textual models. Besides trying different architectures, we have tried different training strategies, such as initializing the CNN weights with a model already trained solely on MMHS150K images or using dropout to force the multimodal models to use the visual information. Eventually, though, these models end up using almost only the text input for the prediction and producing very similar results to those of the textual models. The proposed multimodal models, such as TKM, have shown good performance in other tasks, such as VQA. Next, we analyze why they do not perform well in this task and with this data:
Noisy data. A major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this also affects detection using only text, its repercussions are bigger in more complex tasks, such as detection using images or multimodal detection.
Complexity and diversity of multimodal relations. Hate speech multimodal publications rely on a lot of background knowledge, which makes the relations between the visual and textual elements they use very complex and diverse, and therefore difficult for a neural network to learn.
Small set of multimodal examples. Fig. FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. Although we have collected a big dataset of $150K$ tweets, the subset of multimodal hate it contains is still too small to learn the complex multimodal relations needed to identify multimodal hate.
<<</Results>>>
<<<Conclusions>>>
In this work we have explored the task of hate speech detection on multimodal publications. We have created MMHS150K, to our knowledge the biggest available hate speech dataset, and the first one composed of multimodal data, namely tweets formed by an image and text. We have trained different textual, visual and multimodal models with that data, and found out that, despite the fact that images are useful for hate speech detection, the multimodal models do not outperform the textual models. Finally, we have analyzed the challenges of the proposed task and dataset. Given that most of the content in social media nowadays is multimodal, we truly believe in the importance of pushing forward this research. The code used in this work is available in .
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nHate Speech Detection\nVisual and Textual Data Fusion\nThe MMHS150K dataset\nTweets Gathering\nTextual Image Filtering\nAnnotation\nMethodology\nUnimodal Treatment\nImages.\nTweet Text.\nImage Text.\nMultimodal Architectures\nFeature Concatenation Model (FCM)\nSpatial Concatenation Model (SCM)\nTextual Kernels Model (TKM)\nTraining\nResults\nConclusions"
],
"type": "outline"
}
|
1912.00871
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Solving Arithmetic Word Problems Automatically Using Transformer and Unambiguous Representations
<<<Abstract>>>
Constructing accurate and automatic solvers of math word problems has proven to be quite challenging. Prior attempts using machine learning have been trained on corpora specific to math word problems to produce arithmetic expressions in infix notation before answer computation. We find that custom-built neural networks have struggled to generalize well. This paper outlines the use of Transformer networks trained to translate math word problems to equivalent arithmetic expressions in infix, prefix, and postfix notations. In addition to training directly on domain-specific corpora, we use an approach that pre-trains on a general text corpus to provide foundational language abilities to explore if it improves performance. We compare results produced by a large number of neural configurations and find that most configurations outperform previously reported approaches on three of four datasets with significant increases in accuracy of over 20 percentage points. The best neural approaches boost accuracy by almost 10% on average when compared to the previous state of the art.
<<</Abstract>>>
<<<Introduction>>>
Students are exposed to simple arithmetic word problems starting in elementary school, and most become proficient in solving them at a young age. Automatic solvers of such problems could potentially help educators, as well as become an integral part of general question answering services. However, it has been challenging to write programs to solve even such elementary school level problems well.
Solving a math word problem (MWP) starts with one or more sentences describing a transactional situation to be understood. The sentences are processed to produce an arithmetic expression, which is evaluated to provide an answer. Recent neural approaches to solving arithmetic word problems have used various flavors of recurrent neural networks (RNN) as well as reinforcement learning. Such methods have had difficulty achieving a high level of generalization. Often, systems extract the relevant numbers successfully but misplace them in the generated expressions. More problematic, they get the arithmetic operations wrong. The use of infix notation also requires pairs of parentheses to be placed and balanced correctly, bracketing the right numbers. There have been problems with parentheses placement as well.
Correctly extracting the numbers in the problem is necessary. Figure FIGREF1 gives examples of some infix representations that a machine learning solver can potentially produce from a simple word problem using the correct numbers. Of the expressions shown, only the first one is correct. After carefully observing expressions that actual problem solvers have generated, we want to explore if the use of infix notation may itself be a part of the problem because it requires the generation of additional characters, the open and close parentheses, which must be balanced and placed correctly.
The actual numbers appearing in MWPs vary widely from problem to problem. Real numbers take any conceivable value, making it almost impossible for a neural network to learn representations for them. As a result, trained programs sometimes generate expressions that have seemingly random numbers. For example, in some runs, a trained program could generate a potentially inexplicable expression such as $(25.01 - 4) * 9$ for the problem given in Figure FIGREF1, with one or more numbers not in the problem sentences. We hypothesize that replacing the numbers in the problem statement with generic tags like $\rm \langle n1 \rangle $, $\rm \langle n2 \rangle $, and $\rm \langle n3 \rangle $ and saving their values as a pre-processing step, does not take away from the generality of the solution, but suppresses the problem of fertility in number generation leading to the introduction of numbers not present in the question sentences.
Another idea we want to test is whether a neural network which has been pre-trained to acquire language knowledge is better able to “understand" the problem sentences. Pre-training with a large amount of arithmetic-related text is likely to help develop such knowledge, but due to the lack of large such focused corpora, we want to test whether pre-training with a sufficient general corpus is beneficial.
In this paper, we use the Transformer model BIBREF0 to solve arithmetic word problems as a particular case of machine translation from text to the language of arithmetic expressions. Transformers in various configurations have become a staple of NLP in the past two years. Past neural approaches did not treat this problem as pure translation like we do, and additionally, these approaches usually augmented the neural architectures with various external modules such as parse trees or used deep reinforcement learning, which we do not do. In this paper, we demonstrate that Transformers can be used to solve MWPs successfully with the simple adjustments we describe above. We compare performance on four individual datasets. In particular, we show that our translation-based approach outperforms state-of-the-art results reported by BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5 by a large margin on three of four datasets tested. On average, our best neural architecture outperforms previous results by almost 10%, although our approach is conceptually more straightforward.
We organize our paper as follows. The second section presents related work. Then, we discuss our approach. We follow by an analysis of experimental results and compare them to those of other recent approaches. We also discuss our successes and shortcomings. Finally, we share our concluding thoughts and end with our direction for future work.
<<</Introduction>>>
<<<Related Work>>>
Past strategies have used rules and templates to match sentences to arithmetic expressions. Some such approaches seemed to solve problems impressively within a narrow domain, but performed poorly when out of domain, lacking generality BIBREF6, BIBREF7, BIBREF8, BIBREF9. Kushman et al. BIBREF3 used feature extraction and template-based categorization by representing equations as expression forests and finding a near match. Such methods required human intervention in the form of feature engineering and development of templates and rules, which is not desirable for expandability and adaptability. Hosseini et al. BIBREF2 performed statistical similarity analysis to obtain acceptable results, but did not perform well with texts that were dissimilar to training examples.
Existing approaches have used various forms of auxiliary information. Hosseini et al. BIBREF2 used verb categorization to identify important mathematical cues and contexts. Mitra and Baral BIBREF10 used predefined formulas to assist in matching. Koncel-Kedziorski et al. BIBREF11 parsed the input sentences, enumerated all parses, and learned to match, requiring expensive computations. Roy and Roth BIBREF12 performed searches for semantic trees over large spaces.
Some recent approaches have transitioned to using neural networks. Semantic parsing takes advantage of RNN architectures to parse MWPs directly into equations or expressions in a math-specific language BIBREF9, BIBREF13. RNNs have shown promising results, but they have had difficulties balancing parenthesis, and also, sometimes incorrectly choose numbers when generating equations. Rehman et al. BIBREF14 used POS tagging and classification of equation templates to produce systems of equations from third-grade level MWPs. Most recently, Sun et al. BIBREF13 used a Bi-Directional LSTM architecture for math word problems. Huang et al. BIBREF15 used a deep reinforcement learning model to achieve character placement in both seen and novel equation templates. Wang et al. BIBREF1 also used deep reinforcement learning.
<<</Related Work>>>
<<<Approach>>>
We view math word problem solving as a sequence-to-sequence translation problem. RNNs have excelled in sequence-to-sequence problems such as translation and question answering. The recent introduction of attention mechanisms has improved the performance of RNN models. Vaswani et al. BIBREF0 introduced the Transformer network, which uses stacks of attention layers instead of recurrence. Applications of Transformers have achieved state-of-the-art performance in many NLP tasks. We use this architecture to produce character sequences that are arithmetic expressions. The models we experiment with are easy and efficient to train, allowing us to test several configurations for a comprehensive comparison. We use several configurations of Transformer networks to learn the prefix, postfix, and infix notations of MWP equations independently.
Prefix and postfix representations of equations do not contain parentheses, which have been a source of confusion in some approaches. If the learned target sequences are simple, with fewer characters to generate, the model is less likely to make mistakes during generation. Simple targets may also make the model's learning more robust. Experimenting with all three representations of equivalent expressions may help us discover which one works best.
We train on standard datasets, which are readily available and commonly used. Our method considers the translation of English text to simple algebraic expressions. After performing experiments by training directly on math word problem corpora, we perform a different set of experiments by pre-training on a general language corpus. The success of pre-trained models such as ELMo BIBREF16, GPT-2 BIBREF17, and BERT BIBREF18 on many natural language tasks provides reason to believe that pre-training is likely to produce better learning by our system. We use pre-training so that the system has some foundational knowledge of English before we train it on the domain-specific text of math word problems. However, the output is not natural language but algebraic expressions, which is likely to limit the effectiveness of such pre-training.
<<<Data>>>
We work with four individual datasets. The datasets contain addition, subtraction, multiplication, and division word problems.
AI2 BIBREF2. AI2 is a collection of 395 addition and subtraction problems, containing numeric values some of which may not be relevant to the question.
CC BIBREF19. The Common Core dataset contains 600 2-step questions. The Cognitive Computation Group at the University of Pennsylvania gathered these questions.
IL BIBREF4. The Illinois dataset contains 562 1-step algebra word questions. The Cognitive Computation Group compiled these questions also.
MAWPS BIBREF20. MAWPS is a relatively large collection, primarily from other MWP datasets. We use 2,373 of 3,915 MWPs from this set. The problems not used were more complex problems that generate systems of equations. We exclude such problems because generating systems of equations is not our focus.
We take a randomly sampled 95% of examples from each dataset for training. From each dataset, MWPs not included in training make up the testing data used when generating our results. Training and testing are repeated three times, and reported results are an average of the three outcomes.
<<</Data>>>
<<<Representation Conversion>>>
We take a simple approach to convert infix expressions found in the MWPs to the other two representations. Two stacks are filled by iterating through string characters, one with operators found in the equation and the other with the operands. From these stacks, we form a binary tree structure. Traversing an expression tree in pre-order results in a prefix conversion. Post-order traversal gives us a postfix expression. Three versions of our training and testing data are created to correspond to each type of expression. By training on different representations, we expect our test results to change.
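An equivalent conversion can be sketched with a shunting-yard pass for postfix and an expression tree read in pre-order for prefix; the sketch below assumes token-level input and the four binary operators, and is not the exact two-stack routine used here:

```python
PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(tokens):
    """Shunting-yard conversion: infix token list -> postfix token list."""
    output, ops = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            while ops and ops[-1] != "(" and PRECEDENCE[ops[-1]] >= PRECEDENCE[tok]:
                output.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                output.append(ops.pop())
            ops.pop()                                  # discard the "("
        else:                                          # operand
            output.append(tok)
    output.extend(reversed(ops))
    return output

def to_prefix(tokens):
    """Build an expression tree from the postfix form and read it in pre-order."""
    stack = []
    for tok in to_postfix(tokens):
        if tok in PRECEDENCE:
            right, left = stack.pop(), stack.pop()
            stack.append((tok, left, right))
        else:
            stack.append(tok)
    def preorder(node):
        if isinstance(node, tuple):
            op, left, right = node
            return [op] + preorder(left) + preorder(right)
        return [node]
    return preorder(stack.pop())

infix = "( <n1> - <n2> ) * <n3>".split()
print(to_postfix(infix))   # ['<n1>', '<n2>', '-', '<n3>', '*']
print(to_prefix(infix))    # ['*', '-', '<n1>', '<n2>', '<n3>']
```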
<<</Representation Conversion>>>
<<<Pre-training>>>
We pre-train half of our networks to endow them with a foundational knowledge of English. Pre-training models on significant-sized language corpora has been a common approach recently. We explore the pre-training approach using a general English corpus because the language of MWPs is regular English, interspersed with numerical values. Ideally, the corpus for pre-training should be a very general and comprehensive corpus like an English Wikipedia dump or many gigabytes of human-generated text scraped from the internet like GPT-2 BIBREF21 used. However, in this paper, we want to perform experiments to see if pre-training with a smaller corpus can help. In particular, for this task, we use the IMDb Movie Reviews dataset BIBREF22. This set contains 314,041 unique sentences. Since movie reviewers wrote this data, it represents natural language unrelated to arithmetic. Training on a much bigger and more general corpus may make the language model stronger, but we leave this for future work.
We compare pre-trained models to non-pre-trained models to observe performance differences. Our pre-trained models are trained in an unsupervised fashion to improve the encodings of our fine-tuned solvers. In the pre-training process, we use sentences from the IMDb reviews with a target output of an empty string. We leave the input unlabelled, which focuses the network on adjusting encodings while providing unbiased decoding when we later change from IMDb English text to MWP-Data.
<<</Pre-training>>>
<<<Method: Training and Testing>>>
The input sequence is a natural language specification of an arithmetic word problem. The MWP questions and equations have been encoded using the subword text encoder provided by the TensorFlow Datasets library. The output is an expression in prefix, infix, or postfix notation, which then can be manipulated further and solved to obtain a final answer.
All examples in the datasets contain numbers, some of which are unique or rare in the corpus. Rare terms are adverse for generalization since the network is unlikely to form good representations for them. As a remedy to this issue, our networks do not consider any relevant numbers during training. Before the networks attempt any translation, we pre-process each question and expression by a number mapping algorithm. This algorithm replaces each numeric value with a corresponding identifier (e.g., $\langle n1 \rangle $, $\langle n2 \rangle $, etc.), and remembers the necessary mapping. We expect that this approach may significantly improve how networks interpret each question. When translating, the numbers in the original question are tagged and cached. From the encoded English and tags, a predicted sequence resembling an expression presents itself as output. Since each network's learned output resembles an arithmetic expression (e.g., $\langle n1 \rangle + \langle n2 \rangle * \langle n3 \rangle $), we use the cached tag mapping to replace the tags with the corresponding numbers and return a final mathematical expression.
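A minimal sketch of such a number mapping step, using ASCII tags of the form <n1> as stand-ins for the $\langle n1 \rangle $ identifiers:

```python
import re

NUMBER = re.compile(r"\d+(?:\.\d+)?")

def mask_numbers(question):
    """Replace each numeric value with an identifier such as <n1> and cache the mapping."""
    mapping = {}
    def tag(match):
        key = f"<n{len(mapping) + 1}>"
        mapping[key] = match.group(0)
        return key
    return NUMBER.sub(tag, question), mapping

def restore_numbers(expression, mapping):
    """Put the cached numbers back into a predicted expression."""
    for key, value in mapping.items():
        expression = expression.replace(key, value)
    return expression

masked, mapping = mask_numbers("Dan has 5 pens and 3 pencils, how many does he have?")
# masked -> "Dan has <n1> pens and <n2> pencils, how many does he have?"
# restore_numbers("<n1> + <n2>", mapping) -> "5 + 3"
```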
Three representation models are trained and tested separately: Prefix-Transformer, Postfix-Transformer, and Infix-Transformer. For each experiment, we use representation-specific Transformer architectures. Each model uses the Adam optimizer with $\beta_1=0.95$ and $\beta_2=0.99$ and a standard epsilon of $1 \times 10^{-9}$. The learning rate is reduced automatically in each training session as the loss decreases. Throughout the training, each model respects a 10% dropout rate. We employ a batch size of 128 for all training. Each model is trained on MWP data for 300 iterations before testing. The networks are trained on a machine using 1 Nvidia 1080 Ti graphics processing unit (GPU).
We compare medium-sized, small, and minimal networks to show whether network size can be reduced to increase training and testing efficiency while retaining high accuracy. Networks with more than six layers have proven to be ineffective for this task. We tried many configurations of our network models, but report results with only three configurations of Transformers.
Transformer Type 1: This network is a small to medium-sized network consisting of 4 Transformer layers. Each layer utilizes 8 attention heads with a depth of 512 and a feed-forward depth of 1024.
Transformer Type 2: The second model is small in size, using 2 Transformer layers. The layers utilize 8 attention heads with a depth of 256 and a feed-forward depth of 1024.
Transformer Type 3: The third type of model is minimal, using only 1 Transformer layer. This network utilizes 8 attention heads with a depth of 256 and a feed-forward depth of 512.
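For concreteness, the three configurations can be sketched with PyTorch's nn.Transformer, interpreting the reported depth as the model dimension and applying the layer count to both encoder and decoder; this is only a shape-level stand-in for the configurations described above, not the actual implementation:

```python
import torch.nn as nn

# Shape-level approximations of the three reported configurations.
transformer_type_1 = nn.Transformer(d_model=512, nhead=8,
                                    num_encoder_layers=4, num_decoder_layers=4,
                                    dim_feedforward=1024, dropout=0.1)
transformer_type_2 = nn.Transformer(d_model=256, nhead=8,
                                    num_encoder_layers=2, num_decoder_layers=2,
                                    dim_feedforward=1024, dropout=0.1)
transformer_type_3 = nn.Transformer(d_model=256, nhead=8,
                                    num_encoder_layers=1, num_decoder_layers=1,
                                    dim_feedforward=512, dropout=0.1)
```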
<<<Objective Function>>>
We calculate the loss in training according to a mean of the sparse categorical cross-entropy formula. Sparse categorical cross-entropy BIBREF23 is used for identifying classes from a feature set, which assumes a large target classification set. Evaluation between the possible translation classes (all vocabulary subword tokens) and the produced class (predicted token) is the metric of performance here. During each evaluation, target terms are masked, predicted, and then compared to the masked (known) value. We adjust the model's loss according to the mean of the translation accuracy after predicting every determined subword in a translation.
where $K = |Translation \; Classes|$, $J = |Translation|$, and $I$ is the number of examples.
<<</Objective Function>>>
<<<Experiment 1: Representation>>>
Some of the problems encountered by prior approaches seem to be attributable to the use of infix notation. In this experiment, we compare translation BLEU-2 scores to spot the differences in representation interpretability. Traditionally, a BLEU score is a metric of translation quality BIBREF24. Our presented BLEU scores represent an average of scores a given model received over each of the target test sets. We use a standard bi-gram weight to show how accurate translations are within a window of two adjacent terms. After testing translations, we calculate an average BLEU-2 score per test set, which is related to the success over that data. An average of the scores for each dataset become the presented value.
where $N$ is the number of test datasets, which is 4.
<<</Experiment 1: Representation>>>
<<<Experiment 2: State-of-the-art>>>
This experiment compares our networks to recent previous work. We count a given test score by a simple “correct versus incorrect” method. The answer to an expression directly ties to all of the translation terms being correct, which is why we do not consider partial precision. We compare average accuracies over 3 test trials on different randomly sampled test sets from each MWP dataset. This calculation more accurately depicts the generalization of our networks.
<<</Experiment 2: State-of-the-art>>>
<<<Effect of Pre-training>>>
We also explore the effect of language pre-training, as discussed earlier. This training occurs over 30 iterations, at the start of the two experiments, to introduce a good level of language understanding before training on the MWP data. The same Transformer architectures are also trained solely on the MWP data. We calculate the reported results as:
where $R$ is the number of test repetitions, which is 3; $N$ is the number of test datasets, which is 4; $P$ is the number of MWPs, and $C$ is the number of correct equation translations.
<<</Effect of Pre-training>>>
<<</Method: Training and Testing>>>
<<</Approach>>>
<<<Results>>>
We now present the results of our various experiments. We compare the three representations of target equations and three architectures of the Transformer model in each test.
Results of Experiment 1 are given in Table TABREF21. For clarity, the number in parentheses in front of a row is the Transformer type. By using BLEU scores, we assess the translation capability of each network. This test displays how networks transform different math representations to a character summary level.
We compare by average BLEU-2 accuracy among our tests in the Average column of Table TABREF21 to communicate these translation differences. To make it easier to understand the results, Table TABREF22 provides a summary of Table TABREF21.
Looking at Tables TABREF21 and TABREF22, we note that both the prefix and postfix representations of our target language perform better than the generally used infix notation. The non-pre-trained models perform slightly better than the pre-trained models, and the small or Type 2 models perform slightly better than the minimal-sized and medium-sized Transformer models. The non-pre-trained type 2 prefix Transformer arrangement produced the most consistent translations.
Table TABREF23 provides detailed results of Experiment 2. The numbers are absolute accuracies, i.e., they correspond to cases where the arithmetic expression generated is 100% correct, leading to the correct numeric answer. Results by BIBREF1, BIBREF2, BIBREF4, BIBREF5 are sparse but indicate the scale of success compared to recent past approaches. Prefix, postfix, and infix representations in Table TABREF23 show that network capabilities are changed by how teachable the target data is. The values in the last column of Table TABREF23 are summarized in Table TABREF24. How the models compare with respect to accuracy closely resembles the comparison of BLEU scores, presented earlier. Thus, BLEU scores seem to correlate well with accuracy values in our case.
While our networks fell short of BIBREF1 AI2 testing accuracy, we present state-of-the-art results for the remaining three datasets. The AI2 dataset is tricky because it has numeric values in the word descriptions that are extraneous or irrelevant to the actual computation, whereas the other datasets have only relevant numeric values. The type 2 postfix Transformer received the highest testing average of 87.2%.
Our attempt at language pre-training fell short of our expectations in all but one tested dataset. We had hoped that more stable language understanding would improve results in general. As previously mentioned, using more general and comprehensive corpora of language could help grow semantic ability.
<<<Analysis>>>
All of the network configurations used were very successful for our task. The prefix representation overall provides the most stable network performance. To display the capability of our most successful model (type 2 postfix Transformer), we present some outputs of the network in Figure FIGREF26.
The models respect the syntax of math expressions, even when incorrect. For the majority of questions, our translators were able to determine operators based solely on the context of language.
Our pre-training was unsuccessful in improving accuracy, even when applied to networks larger than those reported. We may need to use a more inclusive language corpus, or pre-train on math-specific texts, to be successful. Our results support our thesis of infix limitation.
<<<Error Analysis>>>
Our system, while performing above standard, could still benefit from some improvements. One issue originates from the algorithmic pre-processing of our questions and expressions. In Figure FIGREF27 we show an example of one such issue. The excerpt comes from a type 3 non-pre-trained Transformer test. The example shows an overlooked identifier, $\langle n1 \rangle $. The issue is attributed to the identifier algorithm only considering numbers in the problem. Observe in the question that the word “eight” is the number we expect to relate to $\langle n2 \rangle $. Our identifying algorithm could be improved by considering such number words and performing conversion to a numerical value. If our algorithm performed as expected, the identifier $\langle n1 \rangle $ would relate to 4 (the first occurring number in the question) and $\langle n2 \rangle $ to 8 (the converted number word appearing second in the question). The overall translation was incorrect whether or not our algorithm was successful, but it is essential to analyze problems like these that may result in future improvements. Had all questions been tagged correctly, our performance would have likely improved.
<<</Error Analysis>>>
<<</Analysis>>>
<<</Results>>>
<<<Conclusions and Future Work>>>
In this paper, we have shown that the use of Transformer networks improves automatic math word problem-solving. We have also shown that the use of postfix target expressions performs better than the other two expression formats. Our improvements are well-motivated but straightforward and easy to use, demonstrating that the well-acclaimed Transformer architecture for language processing can handle MWPs well, obviating the need to build specialized neural architectures for this task.
Extensive pre-training over much larger corpora of language has extended the capabilities of many neural approaches. For example, networks like BERT BIBREF18, trained extensively on data from Wikipedia, perform relatively better in many tasks. Pre-training on a much larger corpus remains an extension we would like to try.
We want to work with more complex MWP datasets. Our datasets contain basic arithmetic expressions of +, -, * and /, and only up to 3 of them. For example, datasets such as Dolphin18k BIBREF25, consisting of web-answered questions from Yahoo! Answers, require a wider variety of arithmetic operators to be understood by the system.
We have noticed that the presence of irrelevant numbers in the sentences for MWPs limits our performance. We can think of such numbers as a sort of adversarial threat to an MWP solver that stress-test it. It may be interesting to explore how to keep a network's performance high, even in such cases.
With a hope to further advance this area of research and heighten interests, all of the code and data used is available on GitHub.
<<</Conclusions and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nApproach\nData\nRepresentation Conversion\nPre-training\nMethod: Training and Testing\nObjective Function\nExperiment 1: Representation\nExperiment 2: State-of-the-art\nEffect of Pre-training\nResults\nAnalysis\nError Analysis\nConclusions and Future Work"
],
"type": "outline"
}
|
1911.11750
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A Measure of Similarity in Textual Data Using Spearman's Rank Correlation Coefficient
<<<Abstract>>>
In the last decade, many diverse advances have occurred in the field of information extraction from data. Information extraction in its simplest form takes place in computing environments, where structured data can be extracted through a series of queries. The continuous expansion of quantities of data has therefore provided an opportunity for knowledge extraction (KE) from a textual document (TD). A typical problem of this kind is the extraction of common characteristics and knowledge from a group of TDs, with the possibility to group such similar TDs in a process known as clustering. In this paper we present a technique for such KE among a group of TDs related to the common characteristics and meaning of their content. Our technique is based on the Spearman's Rank Correlation Coefficient (SRCC), which the conducted experiments have proven to be a comprehensive measure for achieving high-quality KE.
<<</Abstract>>>
<<<Introduction>>>
Over the past few years, the term big data has become an important key point for research into data mining and information retrieval. Through the years, the quantity of data managed across enterprises has evolved from a simple and imperceptible task to an extent to which it has become the central performance improvement problem. In other words, it evolved to be the next frontier for innovation, competition and productivity BIBREF0. Extracting knowledge from data is now a very competitive environment. Many companies process vast amounts of customer/user data in order to improve the quality of experience (QoE) of their customers. For instance, a typical use-case scenario would be a book seller that performs an automatic extraction of the content of the books a customer has bought, and subsequently extracts knowledge of what customers prefer to read. The knowledge extracted could then be used to recommend other books. Book recommending systems are typical examples where data mining techniques should be considered as the primary tool for making future decisions BIBREF1.
KE from TDs is an essential field of research in data mining and it certainly requires techniques that are reliable and accurate in order to neutralize (or even eliminate) uncertainty in future decisions. Grouping TDs based on their content and mutual key information is referred to as clustering. Clustering is mostly performed with respect to a measure of similarity between TDs, which must be represented as vectors in a vector space beforehand BIBREF2. News aggregation engines can be considered as a typical representative where such techniques are extensively applied as a sub-field of natural language processing (NLP).
In this paper we present a new technique for measuring similarity between TDs, represented in a vector space, based on SRCC - "a statistical measure of association between two things" BIBREF3, where in this case the things are TDs. The mathematical properties of SRCC (such as the ability to detect nonlinear correlation) make it compelling to research. Our motivation is to provide a new technique for improving the quality of KE based on the well-known association measure SRCC, as opposed to other well-known TD similarity measures.
The paper is organized as follows: Section SECREF2 gives a brief overview of the vector space representation of a TD and the corresponding similarity measures, in Section SECREF3 we address conducted research of the role of SRCC in data mining and trend prediction. Section SECREF4 is a detailed description of the proposed technique, and later, in Section SECREF5 we present clustering and classification experiments conducted on several sets of TDs, while Section SECREF6 summarizes our research and contribution to the broad area of statistical text analysis.
<<</Introduction>>>
<<<Background>>>
In this section we provide a brief background of vector space representation of TDs and existing similarity measures that have been widely used in statistical text analysis. To begin with, we consider the representation of documents.
<<<Document Representation>>>
A document $d$ can be defined as a finite sequence of terms (independent textual entities within a document, for example, words), namely $d=(t_1,t_2,\dots ,t_n)$. A general idea is to associate weight to each term $t_i$ within $d$, such that
which has proven superior in prior extensive research BIBREF4. The most common weight measure is Term Frequency - Inverse Document Frequency (TF-IDF). TF is the frequency of a term within a single document, and IDF represents the importance, or uniqueness of a term within a set of documents $D=\lbrace d_1, d_2, \dots ,d_m\rbrace $. TF-IDF is defined as follows:
where
such that $f$ is the number of occurrences of $t$ in $d$ and $\log $ is used to avoid very small values close to zero.
Having these measures defined, it becomes obvious that each $w_i$, for $i=1,\dots ,n$ is assigned the TF-IDF value of the corresponding term. It turns out that each document is represented as a vector of TF-IDF weights within a vector space model (VSM) with its properties BIBREF5.
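A minimal sketch of building such TF-IDF vectors, using one standard smoothed weighting variant (raw term frequency times a smoothed log inverse document frequency); the exact normalisation used here may differ:

```python
import math
from collections import Counter

def tfidf_vectors(documents):
    """documents: list of token lists. Returns one {term: weight} dict per document."""
    m = len(documents)
    doc_freq = Counter(term for doc in documents for term in set(doc))
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        # smoothed variant (the +1 keeps terms present in every document non-zero)
        vectors.append({t: f * (1.0 + math.log(m / doc_freq[t])) for t, f in tf.items()})
    return vectors

docs = [["john", "ask", "mary", "marry", "before", "leave"],
        ["before", "leave", "mary", "ask", "john", "wife"]]
vectors = tfidf_vectors(docs)
```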
<<</Document Representation>>>
<<<Measures of Similarity>>>
Different ways of computing the similarity of two vector exist. There are two main approaches in similarity computation:
Deterministic - similarity measures exploiting algebraic properties of vectors and their geometrical interpretation. These include, for instance, cosine similarity (CS), Jaccard coefficients (for binary representations), etc.
Stochastic - similarity measures in which uncertainty is taken into account. These include, for instance, statistics such as Pearson's Correlation Coefficient (PCC) BIBREF6.
Let $\mathbf {u}$ and $\mathbf {v}$ be the vector representations of two documents $d_1$ and $d_2$. Cosine similarity simply measures $\cos \theta $, where $\theta $ is the angle between $\mathbf {u}$ and $\mathbf {v}$:

$\cos \theta = \frac{\mathbf {u} \cdot \mathbf {v}}{\Vert \mathbf {u}\Vert \, \Vert \mathbf {v}\Vert }$ (cosine similarity)

$r = \frac{\sum _{i=1}^{n}(U_i-\bar{U})(V_i-\bar{V})}{\sqrt{\sum _{i=1}^{n}(U_i-\bar{U})^2}\, \sqrt{\sum _{i=1}^{n}(V_i-\bar{V})^2}}$ (PCC)

where $U_i$ and $V_i$ are the components of $\mathbf {u}$ and $\mathbf {v}$, and $\bar{U}$ and $\bar{V}$ are their means.

All of the above measures are widely used and have proven efficient, but an important limitation is that they ignore the order of terms in textual data. Consider two documents containing a single sentence each, with the same terms in reverse order: most deterministic methods fail to express that these are actually very similar. On the other hand, PCC detects only linear correlation, which constrains the diversity present in textual data. In the following section, we study relevant research on solving this problem, and then in Sections SECREF4 and SECREF5 we present our solution and results.
<<</Measures of Similarity>>>
<<</Background>>>
<<<Related Work>>>
A significant number of similarity measures have been proposed and this topic has been thoroughly elaborated. Its main application is considered to be clustering and classification of textual data organized in TDs. In this section, we provide an overview of relevant research on this topic, to which we can later compare our proposed technique for computing vector similarity.
KE (also referred to as knowledge discovery) techniques are used to extract information from unstructured data, which can be subsequently used for applying supervised or unsupervised learning techniques, such as clustering and classification of the content BIBREF7. Text clustering should address several challenges such as vast amounts of data, very high dimensionality of more than 10,000 terms (dimensions), and most importantly - an understandable description of the clusters BIBREF8, which essentially implies the demand for high quality of extracted information.
Regarding high quality KE and information accuracy, much effort has been put into improving similarity measurements. An improvement based on linear algebra, known as Singular Value Decomposition (SVD), is oriented towards word similarity, but instead, its main application is document similarity BIBREF9. Alluring is the fact that this measure takes the advantage of synonym recognition and has been used to achieve human-level scores on multiple-choice synonym questions from the Test of English as a Foreign Language (TOEFL) in a technique known as Latent Semantic Analysis (LSA) BIBREF10 BIBREF5.
Other semantic term similarity measures have been also proposed, based on information exclusively derived from large corpora of words, such as Pointwise Mutual Information (PMI), which has been reported to have achieved a large degree of correctness in the synonym questions in the TOEFL and SAT tests BIBREF11.
Moreover, normalized knowledge-based measures, such as Leacock & Chodrow BIBREF12, Lesk ("how to tell a pine cone from an ice-cream cone" BIBREF13), or measures for the depth of two concepts (preferably verbs) in the WordNet taxonomy BIBREF14, have experimentally proven to be efficient. Their accuracy converges to approximately 69%; Leacock & Chodrow and Lesk have shown the highest precision, and combining them turns out to be approximately the optimal solution BIBREF11.
<<</Related Work>>>
<<<The Spearman's Rank Correlation Coefficient Similarity Measure>>>
The main idea behind our proposed technique is to introduce uncertainty in the calculations of the similarity between TDs represented in a vector space model, based on the nonlinear properties of SRCC. Unlike PCC, which is only able to detect linear correlation, SRCC's nonlinear ability provides a convenient way of taking different ordering of terms into account.
<<<Spearman's Rank Correlation Coefficient>>>
The Spearman's Rank Correlation Coefficient BIBREF3, denoted $\rho $, has a form which is very similar to PCC. Namely, for $n$ raw scores $U_i, V_i$ for $i=1,\dots ,n$ denoting TF-IDF values for two document vectors $\mathbf {U}, \mathbf {V}$,

$\rho = \frac{\sum _{i}(u_i-\bar{u})(v_i-\bar{v})}{\sqrt{\sum _{i}(u_i-\bar{u})^2}\, \sqrt{\sum _{i}(v_i-\bar{v})^2}}$
where $u_i$ and $v_i$ are the corresponding ranks of $U_i$ and $V_i$, for $i=0,\dots ,n-1$. A metric by which to assign the ranks of the TF-IDF values has to be determined beforehand. Each $U_i$ is assigned a rank value $u_i$, such that $u_i=0,1,\dots ,n-1$. It is important to note that the metric by which the TF-IDF values are ranked is essentially their sorting criterion. A convenient way of determining this criterion when dealing with TF-IDF values, which emphasize the importance of a term within a TD set, is to sort these values in ascending order. Thus, the largest (or most important) TF-IDF value within a TD vector is assigned the rank value of $n-1$, and the least important is assigned a value of 0.
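A minimal sketch of this ranking scheme and the resulting SRCC computation (ties are ignored for simplicity; averaged ranks would handle them properly):

```python
import numpy as np

def ascending_ranks(values):
    """Ascending ranks: the smallest TF-IDF value gets 0, the largest gets n-1."""
    order = np.argsort(values)
    ranks = np.empty(len(values), dtype=float)
    ranks[order] = np.arange(len(values))
    return ranks

def srcc(u, v):
    """Spearman's rho as the Pearson correlation of the two rank vectors."""
    ru, rv = ascending_ranks(np.asarray(u)), ascending_ranks(np.asarray(v))
    return float(np.corrcoef(ru, rv)[0, 1])

rho = srcc([0.1, 0.4, 0.0, 0.2], [0.3, 0.9, 0.1, 0.5])   # 1.0: identical orderings
```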
<<<An Illustration of the Ranking TF-IDF Vectors>>>
Consider two TDs $d_1$ and $d_2$, each containing a single sentence.
Document 1: John had asked Mary to marry him before she left.
Document 2: Before she left, Mary was asked by John to be his wife.
Now consider these sentences lemmatized:
Document 1: John have ask Mary marry before leave.
Document 2: Before leave Mary ask John his wife.
Let us now represent $d_1$ and $d_2$ as TF-IDF vectors for the vocabulary in our small corpus.
The results in Table TABREF7 show that SRCC performs much better in knowledge extraction. The two documents' contents convey the same idea expressed with the terms in a different order, namely that John had asked Mary to marry him before she left. It is obvious that cosine similarity cannot recognize this association, but SRCC has successfully recognized it and produced a similarity value of -0.285714.
SRCC is essentially conducive to semantic similarity. Raising the importance of a term in a TD will eventually raise its importance in another TD. But if the two TDs are of different sizes, the terms' importance values will also differ, and a nonlinear association will emerge. This association will not be recognized by PCC at all (as it only detects linear association), but SRCC will catch this detail and produce the desirable similarity value. The idea is to use SRCC to catch the terms which drive the semantic context of a TD, and which will follow a nonlinear pattern and lie on a polynomial curve, not on the line $x=y$.
In our approach, we use a non-standard measure of similarity in textual data with simple and common frequency values, such as TF-IDF, in contrast to the statement that simple frequencies are not enough for high-quality knowledge extraction BIBREF5. In the next section, we will present our experiments and discuss the results we have obtained.
<<</An Illustration of the Ranking TF-IDF Vectors>>>
<<</Spearman's Rank Correlation Coefficient>>>
<<</The Spearman's Rank Correlation Coefficient Similarity Measure>>>
<<<Experiments>>>
In order to test our proposed approach, we have conducted a series of experiments. In this section, we briefly discuss the outcome and provide a clear view of whether our approach is suitable for knowledge extraction from textual data in a semantic context.
We have used a dataset of 14 TDs to conduct our experiments. There are several subjects on which their content is based: (aliens, stories, law, news) BIBREF15.
<<<Comparison Between Similarity Measures>>>
In this part, we have compared the similarity values produced by each of the similarity measures CS, SRCC and PCC. We have picked a few notable results and they are summarized in Table TABREF9 below.
Table TABREF9 shows that SRCC mostly differs from CS and PCC, which also differ from each other in some cases. For instance, $d_1$ refers to leadership in the nineties, while $d_5$ refers to the Family and Medical Leave Act of 1993. We have empirically observed that the general topics discussed in these two textual documents are very different. Namely, $d_1$ discusses different frameworks for leadership empowerment, while $d_5$ discusses medical treatment and self-care of employees. We have observed that the term employee is the only connection between $d_1$ and $d_5$. The CS similarity value of 0.36 is very unrealistic in this case, while PCC (0.05), and especially SRCC (0.0018), provide a much more realistic view of the semantic knowledge aggregated in these documents. Another example is the pair $d_8$ and $d_9$. The contents of these documents are very straightforward and very similar, because $d_8$ discusses aliens seen by Boeing-747 pilots and $d_9$ discusses angels that were considered to be aliens. It is obvious that SRCC is able to detect this association as well as CS and PCC, which are very good in such straightforward cases.
We have observed that SRCC does not perform worse than any other of these similarity measures. It does not always produce the most suitable similarity value, but it indeed does perform at least equally good as other measures. The values in Table TABREF9 are very small, and suggest that SRCC performs well in extracting tiny associations in such cases. It is mostly a few times larger than CS and PCC when there actually exist associations between the documents.
These results are visually summarized in Figure FIGREF10. The two above-described examples can be clearly seen as standing out.
<<</Comparison Between Similarity Measures>>>
<<<Non-linearity of Documents>>>
In this part we will briefly present the nonlinear association between some of the TDs we have used in our experiments. Our purpose is to point out that $(d_6,d_{10})$ and $(d_7,d_{12})$ are the pairs where SRCC is the most appropriate measure for the observed content, and as such, it is able to detect the nonlinear association between them. This can be seen in Figure FIGREF12 below. The straightforward case of $d_8$ and $d_9$ also stands out here (SRCC can also detect it very well).
The obtained results showed that our technique performs well in similarity computation, although it is not a perfect measure. However, it comes close to convenient and widely used similarity measures such as CS and PCC. The next section provides a conclusion of our research and suggestions for further work.
<<</Non-linearity of Documents>>>
<<</Experiments>>>
<<<Conclusion and Future Work>>>
In this paper we have presented a non-standard technique for computing the similarity between TF-IDF vectors. We have put forward our idea and contributed a portion of new knowledge to this field of text analysis. We have proposed a technique that is widely used in similar fields, and our goal is to provide starting information to other researchers in this area. We consider our observations promising and they should be extensively researched.
Our experiments have proved that our technique should be a subject for further research. Our future work will concentrate on the implementation of machine learning techniques, such as clustering and subsequent classification of textual data. We expect information of good quality to be extracted. To summarize, the rapidly emerging area of big data and information retrieval is where our technique should reside and where it should be applied.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nDocument Representation\nMeasures of Similarity\nRelated Work\nThe Spearman's Rank Correlation Coefficient Similarity Measure\nSpearman's Rank Correlation Coefficient\nAn Illustration of the Ranking TF-IDF Vectors\nExperiments\nComparison Between Similarity Measures\nNon-linearity of Documents\nConclusion and Future Work"
],
"type": "outline"
}
|
1911.03894
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
CamemBERT: a Tasty French Language Model
<<<Abstract>>>
Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models—in all languages except English—very limited. Aiming to address this issue for French, we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and downstream applications for French NLP.
<<</Abstract>>>
<<<Introduction>>>
Pretrained word representations have a long history in Natural Language Processing (NLP), from non-neural methods BIBREF0, BIBREF1, BIBREF2 to neural word embeddings BIBREF3, BIBREF4 and to contextualised representations BIBREF5, BIBREF6. Approaches shifted more recently from using these representations as an input to task-specific architectures to replacing these architectures with large pretrained language models. These models are then fine-tuned to the task at hand with large improvements in performance over a wide range of tasks BIBREF7, BIBREF8, BIBREF9, BIBREF10.
These transfer learning methods exhibit clear advantages over more traditional task-specific approaches, probably the most important being that they can be trained in an unsupervised manner. They nevertheless come with implementation challenges, namely the amount of data and computational resources needed for pretraining, which can reach hundreds of gigabytes of uncompressed text and require hundreds of GPUs BIBREF11, BIBREF9. The latest transformer architecture goes as far as using 750GB of plain text and 1024 TPU v3 chips for pretraining BIBREF10. This has limited the availability of these state-of-the-art models to the English language, at least in the monolingual setting. Even though multilingual models give remarkable results, they are often larger and their results still lag behind their monolingual counterparts BIBREF12. This is particularly inconvenient as it hinders their practical use in NLP systems as well as the investigation of their language modeling capacity, something that remains to be investigated in the case of, for instance, morphologically rich languages.
We take advantage of the newly available multilingual corpus OSCAR BIBREF13 and train a monolingual language model for French using the RoBERTa architecture. We pretrain the model - which we dub CamemBERT - and evaluate it in four different downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and natural language inference (NLI). CamemBERT improves the state of the art for most tasks over previous monolingual and multilingual approaches, which confirms the effectiveness of large pretrained language models for French.
We summarise our contributions as follows:
We train a monolingual BERT model on the French language using recent large-scale corpora.
We evaluate our model on four downstream tasks (POS tagging, dependency parsing, NER and natural language inference (NLI)), achieving state-of-the-art results in most tasks, confirming the effectiveness of large BERT-based models for French.
We release our model in a user-friendly format for popular open-source libraries so that it can serve as a strong baseline for future research and be useful for French NLP practitioners.
<<</Introduction>>>
<<<Related Work>>>
<<<From non-contextual to contextual word embeddings>>>
The first neural word vector representations were non-contextualised word embeddings, most notably word2vec BIBREF3, GloVe BIBREF4 and fastText BIBREF14, which were designed to be used as input to task-specific neural architectures. Contextualised word representations such as ELMo BIBREF5 and flair BIBREF6, improved the expressivity of word embeddings by taking context into account. They improved the performance of downstream tasks when they replaced traditional word representations. This paved the way towards larger contextualised models that replaced downstream architectures in most tasks. These approaches, trained with language modeling objectives, range from LSTM-based architectures such as ULMFiT BIBREF15 to the successful transformer-based architectures such as GPT2 BIBREF8, BERT BIBREF7, RoBERTa BIBREF9 and more recently ALBERT BIBREF16 and T5 BIBREF10.
<<</From non-contextual to contextual word embeddings>>>
<<<Non-contextual word embeddings for languages other than English>>>
Since the introduction of word2vec BIBREF3, many attempts have been made to create monolingual models for a wide range of languages. For non-contextual word embeddings, the first two attempts were by BIBREF17 and BIBREF18 who created word embeddings for a large number of languages using Wikipedia. Later BIBREF19 trained fastText word embeddings for 157 languages using Common Crawl and showed that using crawled data significantly increased the performance of the embeddings relatively to those trained only on Wikipedia.
<<</Non-contextual word embeddings for languages other than English>>>
<<<Contextualised models for languages other than English>>>
Following the success of large pretrained language models, they were extended to the multilingual setting with multilingual BERT , a single multilingual model for 104 different languages trained on Wikipedia data, and later XLM BIBREF12, which greatly improved unsupervised machine translation. A few monolingual models have been released: ELMo models for Japanese, Portuguese, German and Basque and BERT for Simplified and Traditional Chinese and German.
However, to the best of our knowledge, no particular effort has been made toward training models for languages other than English, at a scale similar to the latest English models (e.g. RoBERTa trained on more than 100GB of data).
<<</Contextualised models for languages other than English>>>
<<</Related Work>>>
<<<CamemBERT>>>
Our approach is based on RoBERTa BIBREF9, which replicates and improves the initial BERT by identifying key hyper-parameters for more robust performance.
In this section, we describe the architecture, training objective, optimisation setup and pretraining data that was used for CamemBERT.
CamemBERT differs from RoBERTa mainly with the addition of whole-word masking and the usage of SentencePiece tokenisation BIBREF20.
<<<Architecture>>>
Similar to RoBERTa and BERT, CamemBERT is a multi-layer bidirectional Transformer BIBREF21. Given the widespread usage of Transformers, we do not describe them in detail here and refer the reader to BIBREF21. CamemBERT uses the original BERT $_{\small \textsc {BASE}}$ configuration: 12 layers, 768 hidden dimensions, 12 attention heads, which amounts to 110M parameters.
<<</Architecture>>>
<<<Pretraining objective>>>
We train our model on the Masked Language Modeling (MLM) task. Given an input text sequence composed of $N$ tokens $x_1, ..., x_N$, we select $15\%$ of tokens for possible replacement. Among those selected tokens, 80% are replaced with the special $<$mask$>$ token, 10% are left unchanged and 10% are replaced by a random token. The model is then trained to predict the initial masked tokens using cross-entropy loss.
Following RoBERTa we dynamically mask tokens instead of fixing them statically for the whole dataset during preprocessing. This improves variability and makes the model more robust when training for multiple epochs.
Since we segment the input sentence into subwords using SentencePiece, the input tokens to the models can be subwords. An upgraded version of BERT and BIBREF22 have shown that masking whole words instead of individual subwords leads to improved performance. Whole-word masking (WWM) makes the training task more difficult because the model has to predict a whole word instead of predicting only part of the word given the rest. As a result, we used WWM for CamemBERT by first randomly sampling 15% of the words in the sequence and then considering all subword tokens in each of these 15% words for candidate replacement. This amounts to a proportion of selected tokens that is close to the original 15%. These tokens are then either replaced by $<$mask$>$ tokens (80%), left unchanged (10%) or replaced by a random token.
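As an illustration of this masking scheme (not the implementation used to train CamemBERT; the whitespace-split words, subword segmentations and vocabulary below are toy assumptions), whole-word masking with the 80/10/10 replacement rule can be sketched as follows:

```python
# Illustrative sketch of whole-word masking: 15% of the words are sampled, and every
# subword of a sampled word becomes a prediction target, replaced by <mask> (80%),
# kept unchanged (10%) or swapped for a random token (10%).
import random

def whole_word_mask(subwords_per_word, vocab, mask_token="<mask>", rate=0.15):
    """subwords_per_word: list of lists of subword tokens, one inner list per word."""
    n_words = len(subwords_per_word)
    selected = set(random.sample(range(n_words), max(1, round(rate * n_words))))
    tokens, targets = [], []
    for i, pieces in enumerate(subwords_per_word):
        for piece in pieces:
            if i not in selected:
                tokens.append(piece)
                targets.append(None)                 # not a prediction target
                continue
            targets.append(piece)                    # the model must recover this subword
            r = random.random()
            if r < 0.8:
                tokens.append(mask_token)            # 80%: masked
            elif r < 0.9:
                tokens.append(piece)                 # 10%: left unchanged
            else:
                tokens.append(random.choice(vocab))  # 10%: random token
    return tokens, targets
```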
Subsequent work has shown that the next sentence prediction (NSP) task originally used in BERT does not improve downstream task performance BIBREF12, BIBREF9; as a consequence, we do not use NSP.
<<</Pretraining objective>>>
<<<Optimisation>>>
Following BIBREF9, we optimise the model using Adam BIBREF23 ($\beta _1 = 0.9$, $\beta _2 = 0.98$) for 100k steps. We use large batch sizes of 8192 sequences. Each sequence contains at most 512 tokens. We enforce each sequence to only contain complete sentences. Additionally, we used the DOC-SENTENCES scenario from BIBREF9, consisting of not mixing multiple documents in the same sequence, which showed slightly better results.
<<</Optimisation>>>
<<<Segmentation into subword units>>>
We segment the input text into subword units using SentencePiece BIBREF20. SentencePiece is an extension of Byte-Pair encoding (BPE) BIBREF24 and WordPiece BIBREF25 that does not require pre-tokenisation (at the word or token level), thus removing the need for language-specific tokenisers. We use a vocabulary size of 32k subword tokens. These are learned on $10^7$ sentences sampled from the pretraining dataset. We do not use subword regularisation (i.e. sampling from multiple possible segmentations) in our implementation for simplicity.
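A minimal sketch of this segmentation step with the SentencePiece Python package is shown below; the file names and the sampled-corpus file are hypothetical, and library defaults are assumed for anything not stated in the text:

```python
# Minimal sketch: train a 32k-subword SentencePiece model on raw text and segment a
# sentence. File names are placeholders; library defaults are used otherwise.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="french_sample.txt",   # hypothetical: sentences sampled from the pretraining data
    model_prefix="sp_french",    # hypothetical output prefix
    vocab_size=32000,
)

sp = spm.SentencePieceProcessor(model_file="sp_french.model")
print(sp.encode("Le camembert est un fromage.", out_type=str))  # list of subword strings
```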
<<</Segmentation into subword units>>>
<<<Pretraining data>>>
Pretrained language models can be significantly improved by using more data BIBREF9, BIBREF10. Therefore we use French text extracted from Common Crawl; in particular, we use OSCAR BIBREF13, a pre-classified and pre-filtered version of the November 2018 Common Crawl snapshot.
OSCAR is a set of monolingual corpora extracted from Common Crawl, specifically from the plain text WET format distributed by Common Crawl, which removes all HTML tags and converts all text encodings to UTF-8. OSCAR follows the same approach as BIBREF19 by using a language classification model based on the fastText linear classifier BIBREF26, BIBREF27 pretrained on Wikipedia, Tatoeba and SETimes, which supports 176 different languages.
OSCAR performs a deduplication step after language classification and without introducing a specialised filtering scheme, other than only keeping paragraphs containing 100 or more UTF-8 encoded characters, making OSCAR quite close to the original Crawled data.
We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens.
<<</Pretraining data>>>
<<</CamemBERT>>>
<<<Evaluation>>>
<<<Part-of-speech tagging and dependency parsing>>>
We first evaluate CamemBERT on the two downstream tasks of part-of-speech (POS) tagging and dependency parsing. POS tagging is a low-level syntactic task, which consists in assigning to each word its corresponding grammatical category. Dependency parsing consists in predicting the labeled syntactic tree capturing the syntactic relations between words.
We run our experiments using the Universal Dependencies (UD) paradigm and its corresponding UD POS tag set BIBREF28 and UD treebank collection version 2.2 BIBREF29, which was used for the CoNLL 2018 shared task. We perform our work on the four freely available French UD treebanks in UD v2.2: GSD, Sequoia, Spoken, and ParTUT.
GSD BIBREF30 is the second-largest treebank available for French after the FTB (described in subsection SECREF25), it contains data from blogs, news articles, reviews, and Wikipedia. The Sequoia treebank BIBREF31, BIBREF32 comprises more than 3000 sentences, from the French Europarl, the regional newspaper L’Est Républicain, the French Wikipedia and documents from the European Medicines Agency. Spoken is a corpus converted automatically from the Rhapsodie treebank BIBREF33, BIBREF34 with manual corrections. It consists of 57 sound samples of spoken French with orthographic transcription and phonetic transcription aligned with sound (word boundaries, syllables, and phonemes), syntactic and prosodic annotations. Finally, ParTUT is a conversion of a multilingual parallel treebank developed at the University of Turin, and consisting of a variety of text genres, including talks, legal texts, and Wikipedia articles, among others; ParTUT data is derived from the already-existing parallel treebank Par(allel)TUT BIBREF35 . Table TABREF23 contains a summary comparing the sizes of the treebanks.
We evaluate the performance of our models using the standard UPOS accuracy for POS tagging, and Unlabeled Attachment Score (UAS) and Labeled Attachment Score (LAS) for dependency parsing. We assume gold tokenisation and gold word segmentation as provided in the UD treebanks.
<<<Baselines>>>
To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT). We then compare our models to UDify BIBREF36. UDify is a multitask and multilingual model based on mBERT that is near state-of-the-art on all UD languages including French for both POS tagging and dependency parsing.
It is relevant to compare CamemBERT to UDify on those tasks because UDify is the work that has pushed the performance of end-to-end fine-tuning of a BERT-based model the furthest on downstream POS tagging and dependency parsing. Finally, we compare our model to UDPipe Future BIBREF37, a model ranked 3rd in dependency parsing and 6th in POS tagging during the CoNLL 2018 shared task BIBREF38. UDPipe Future provides us with a strong baseline that does not make use of any pretrained contextual embedding.
We will compare to the more recent cross-lingual language model XLM BIBREF12, as well as the state-of-the-art CoNLL 2018 shared task results with predicted tokenisation and segmentation in an updated version of the paper.
<<</Baselines>>>
<<</Part-of-speech tagging and dependency parsing>>>
<<<Named Entity Recognition>>>
Named Entity Recognition (NER) is a sequence labeling task that consists in predicting which words refer to real-world objects, such as people, locations, artifacts and organisations. We use the French Treebank (FTB) BIBREF39 in its 2008 version introduced by cc-clustering:09short and with NER annotations by sagot2012annotation. The NER-annotated FTB contains more than 12k sentences and more than 350k tokens extracted from articles of the newspaper Le Monde published between 1989 and 1995. In total, it contains 11,636 entity mentions distributed among 7 different types of entities, namely: 2025 mentions of “Person”, 3761 of “Location”, 2382 of “Organisation”, 3357 of “Company”, 67 of “Product”, 15 of “POI” (Point of Interest) and 29 of “Fictional Character”.
A large proportion of the entity mentions in the treebank are multi-word entities. For NER we therefore report the 3 metrics that are commonly used to evaluate models: precision, recall, and F1 score. Here precision measures the percentage of entities found by the system that are correctly tagged, recall measures the percentage of named entities present in the corpus that are found and the F1 score combines both precision and recall measures giving a general idea of a model's performance.
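For reference, the F1 score used here is the standard harmonic mean of precision and recall, $F_1 = 2 \cdot \frac{P \cdot R}{P + R}$.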
<<</Named Entity Recognition>>>
<<<Natural Language Inference>>>
We also evaluate our model on the Natural Language Inference (NLI) task, using the French part of the XNLI dataset BIBREF50. NLI consists in predicting whether a hypothesis sentence is entailed, neutral or contradicts a premise sentence.
The XNLI dataset is the extension of the Multi-Genre NLI (MultiNLI) corpus BIBREF51 to 15 languages by translating the validation and test sets manually into each of those languages. The English training set is also machine translated for all languages. The dataset is composed of 122k train, 2490 valid and 5010 test examples. As usual, NLI performance is evaluated using accuracy.
To evaluate a model on a language other than English (such as French), we consider the two following settings:
TRANSLATE-TEST: The French test set is machine translated into English, and then used with an English classification model. This setting provides a reasonable, although imperfect, way to circumvent the fact that no such data set exists for French, and results in very strong baseline scores.
TRANSLATE-TRAIN: The French model is fine-tuned on the machine-translated English training set and then evaluated on the French test set. This is the setting that we used for CamemBERT.
<<</Natural Language Inference>>>
<<</Evaluation>>>
<<<Experiments>>>
In this section, we measure the performance of CamemBERT by evaluating it on the four aforementioned tasks: POS tagging, dependency parsing, NER and NLI.
<<<Experimental Setup>>>
<<<Pretraining>>>
We use the RoBERTa implementation in the fairseq library BIBREF53. Our learning rate is warmed up for 10k steps up to a peak value of $0.0007$ instead of the original $0.0001$ given our large batch size (8192). The learning rate fades to zero with polynomial decay. We pretrain our model on 256 Nvidia V100 GPUs (32GB each) for 100k steps during 17h.
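As a rough illustration of this schedule (a PyTorch sketch, not the fairseq configuration; the degree-1 polynomial decay and the placeholder model are assumptions):

```python
# Minimal sketch of the pretraining schedule: linear warm-up to the peak learning rate
# over 10k steps, then polynomial (here degree-1, i.e. linear) decay to zero.
import torch

peak_lr, warmup_steps, total_steps = 7e-4, 10_000, 100_000

model = torch.nn.Linear(10, 10)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=peak_lr, betas=(0.9, 0.98))

def lr_lambda(step):
    if step < warmup_steps:
        return step / max(1, warmup_steps)                                   # linear warm-up
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))     # decay to zero

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```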
<<</Pretraining>>>
<<<Fine-tuning>>>
For each task, we append the relevant predictive layer on top of CamemBERT's Transformer architecture. Following the work done on BERT BIBREF7, for sequence tagging and sequence labeling we append a linear layer respectively to the $<$s$>$ special token and to the first subword token of each word. For dependency parsing, we plug a bi-affine graph predictor head as inspired by BIBREF54 following the work done on multilingual parsing with BERT by BIBREF36. We refer the reader to these two articles for more details on this module.
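A minimal sketch of the sequence labeling case is given below (plain PyTorch, assuming a generic pretrained encoder that returns per-token hidden states of size 768; in practice only the first subword of each word carries a prediction, which is omitted here for brevity):

```python
# Minimal sketch: a linear classification head appended on top of a pretrained encoder
# for per-token labeling (e.g. POS tagging or NER).
import torch.nn as nn

class TokenClassifier(nn.Module):
    def __init__(self, encoder, hidden_size=768, num_labels=18):
        super().__init__()
        self.encoder = encoder                       # pretrained CamemBERT-like encoder
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        # assumption: encoder(...) returns hidden states of shape
        # (batch, sequence_length, hidden_size)
        hidden_states = self.encoder(input_ids, attention_mask)
        return self.classifier(hidden_states)        # per-token label logits
```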
We fine-tune independently CamemBERT for each task and each dataset. We optimise the model using the Adam optimiser BIBREF23 with a fixed learning rate. We run a grid search on a combination of learning rates and batch sizes. We select the best model on the validation set out of the 30 first epochs.
Although this might push the performance even further, for all tasks except NLI we do not apply any regularisation techniques such as weight decay, learning rate warm-up or discriminative fine-tuning. We show that fine-tuning CamemBERT in a straightforward manner leads to state-of-the-art results on most tasks and outperforms the existing BERT-based models in most cases.
The POS tagging, dependency parsing, and NER experiments are run using Hugging Face's Transformers library extended to support CamemBERT and dependency parsing BIBREF55. The NLI experiments use the fairseq library following the RoBERTa implementation.
<<</Fine-tuning>>>
<<</Experimental Setup>>>
<<<Results>>>
<<<Part-of-Speech tagging and dependency parsing>>>
For POS tagging and dependency parsing, we compare CamemBERT to three other near state-of-the-art models in Table TABREF32. CamemBERT outperforms UDPipe Future by a large margin for all treebanks and all metrics. Despite a much simpler optimisation process, CamemBERT beats UDify performances on all the available French treebanks.
CamemBERT also demonstrates higher performances than mBERT on those tasks. We observe a larger error reduction for parsing than for tagging. For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT. For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT.
<<</Part-of-Speech tagging and dependency parsing>>>
<<<Natural Language Inference: XNLI>>>
On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters).
<<</Natural Language Inference: XNLI>>>
<<<Named-Entity Recognition>>>
For named entity recognition, our experiments show that CamemBERT achieves slightly better precision than the traditional CRF-based SEM architectures described above in Section SECREF25 (CRF and Bi-LSTM+CRF), but shows a dramatic improvement in finding entity mentions, raising the recall score by 3.5 points. Both improvements result in a 2.36 point increase in the F1 score with respect to the best SEM architecture (BiLSTM-CRF), giving CamemBERT the state of the art for NER on the FTB. Another important finding concerns the results obtained by mBERT. Previous work with this model showed increased NER performance for German, Dutch and Spanish when mBERT is used as a contextualised word embedding for an NER-specific model BIBREF48. Our results, however, suggest that the multilingual setting in which mBERT was trained is simply not enough to use it alone and fine-tune it for French NER: it shows worse performance than even simple CRF models, suggesting that monolingual models could be better at NER.
<<</Named-Entity Recognition>>>
<<</Results>>>
<<<Discussion>>>
CamemBERT displays improved performance compared to prior work for the 4 downstream tasks considered. This confirms the hypothesis that pretrained language models can be effectively fine-tuned for various downstream tasks, as observed for English in previous work. Moreover, our results also show that dedicated monolingual models still outperform multilingual ones. We explain this point in two ways. First, the scale of data is possibly essential to the performance of CamemBERT. Indeed, we use 138GB of uncompressed text vs. 57GB for mBERT. Second, with more data comes more diversity in the pretraining distribution. Reaching state-of-the-art performances on 4 different tasks and 6 different datasets requires robust pretrained models. Our results suggest that the variability in the downstream tasks and datasets considered is handled more efficiently by a general language model than by Wikipedia-pretrained models such as mBERT.
<<</Discussion>>>
<<</Experiments>>>
<<<Conclusion>>>
CamemBERT improves the state of the art for multiple downstream tasks in French. It is also lighter than other BERT-based approaches such as mBERT or XLM. By releasing our model, we hope that it can serve as a strong baseline for future research in French NLP, and expect our experiments to be reproduced in many other languages. We will publish an updated version in the near future where we will explore and release models trained for longer, with additional downstream tasks, baselines (e.g. XLM) and analysis, we will also train additional models with potentially cleaner corpora such as CCNet BIBREF56 for more accurate performance evaluation and more complete ablation.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nFrom non-contextual to contextual word embeddings\nNon-contextual word embeddings for languages other than English\nContextualised models for languages other than English\nCamemBERT\nArchitecture\nPretraining objective\nOptimisation\nSegmentation into subword units\nPretraining data\nEvaluation\nPart-of-speech tagging and dependency parsing\nBaselines\nNamed Entity Recognition\nNatural Language Inference\nExperiments\nExperimental Setup\nPretraining\nFine-tuning\nResults\nPart-of-Speech tagging and dependency parsing\nNatural Language Inference: XNLI\nNamed-Entity Recognition\nDiscussion\nConclusion"
],
"type": "outline"
}
|
1912.01673
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
COSTRA 1.0: A Dataset of Complex Sentence Transformations
<<<Abstract>>>
COSTRA 1.0 is a dataset of Czech complex sentence transformations. The dataset is intended for the study of sentence-level embeddings beyond simple word alternations or standard paraphrasing. ::: The dataset consist of 4,262 unique sentences with average length of 10 words, illustrating 15 types of modifications such as simplification, generalization, or formal and informal language variation. ::: The hope is that with this dataset, we should be able to test semantic properties of sentence embeddings and perhaps even to find some topologically interesting “skeleton” in the sentence embedding space.
<<</Abstract>>>
<<<Introduction>>>
Vector representations are becoming truly essential in majority of natural language processing tasks. Word embeddings became widely popular with the introduction of word2vec BIBREF0 and GloVe BIBREF1 and their properties have been analyzed in length from various aspects.
Studies of word embeddings range from word similarity BIBREF2, BIBREF3, over the ability to capture derivational relations BIBREF4, linear superposition of multiple senses BIBREF5, the ability to predict semantic hierarchies BIBREF6 or POS tags BIBREF7 up to data efficiency BIBREF8.
Several studies BIBREF9, BIBREF10, BIBREF11, BIBREF12 show that word vector representations are capable of capturing meaningful syntactic and semantic regularities. These include, for example, male/female relation demonstrated by the pairs “man:woman”, “king:queen” and the country/capital relation (“Russia:Moscow”, “Japan:Tokyo”). These regularities correspond to simple arithmetic operations in the vector space.
Sentence embeddings are becoming equally ubiquitous in NLP, with novel representations appearing almost every other week. With an overwhelming number of methods to compute sentence vector representations, the study of their general properties becomes difficult. Furthermore, it is not so clear in which way the embeddings should be evaluated.
In an attempt to bring together more traditional representations of sentence meanings and the emerging vector representations, bojar:etal:jnle:representations:2019 introduce a number of aspects or desirable properties of sentence embeddings. One of them is denoted as “relatability”, which highlights the correspondence between meaningful differences between sentences and geometrical relations between their respective embeddings in the highly dimensional continuous vector space. If such a correspondence could be found, we could use geometrical operations in the space to induce meaningful changes in sentences.
In this work, we present COSTRA, a new dataset of COmplex Sentence TRAnsformations. In its first version, the dataset is limited to sample sentences in Czech. The goal is to support studies of semantic and syntactic relations between sentences in the continuous space. Our dataset is the prerequisite for one of possible ways of exploring sentence meaning relatability: we envision that the continuous space of sentences induced by an ideal embedding method would exhibit topological similarity to the graph of sentence variations. For instance, one could argue that a subset of sentences could be organized along a linear scale reflecting the formalness of the language used. Another set of sentences could form a partially ordered set of gradually less and less concrete statements. And yet another set, intersecting both of the previous ones in multiple sentences could be partially or linearly ordered according to the strength of the speakers confidence in the claim.
Our long term goal is to search for an embedding method which exhibits this behaviour, i.e. that the topological map of the embedding space corresponds to meaningful operations or changes in the set of sentences of a language (or more languages at once). We prefer this behaviour to emerge, as it happened for word vector operations, but regardless if the behaviour is emergent or trained, we need a dataset of sentences illustrating these patterns. If large enough, such a dataset could serve for training. If it will be smaller, it will provide a test set. In either case, these sentences could provide a “skeleton” to the continuous space of sentence embeddings.
The paper is structured as follows: the Background section summarizes existing methods of sentence embedding evaluation and related work, the Annotation section describes our methodology for constructing the dataset, and the Dataset Description section details the obtained dataset and some first observations. We conclude and provide the link to the dataset in the Conclusion and Future Work section.
<<</Introduction>>>
<<<Background>>>
As hinted above, there are many methods of converting a sequence of words into a vector in a highly dimensional space. To name a few: BiLSTM with the max-pooling trained for natural language inference BIBREF13, masked language modeling and next sentence prediction using bidirectional Transformer BIBREF14, max-pooling last states of neural machine translation among many languages BIBREF15 or the encoder final state in attentionless neural machine translation BIBREF16.
The most common way of evaluating methods of sentence embeddings is extrinsic, using so called `transfer tasks', i.e. comparing embeddings via the performance in downstream tasks such as paraphrasing, entailment, sentence sentiment analysis, natural language inference and other assignments. However, even simple bag-of-words (BOW) approaches achieve often competitive results on such tasks BIBREF17.
Adi16 introduce intrinsic evaluation by measuring the ability of models to encode basic linguistic properties of a sentence such as its length, word order, and word occurrences. These so called `probing tasks' are further extended by a depth of the syntactic tree, top constituent or verb tense by DBLP:journals/corr/abs-1805-01070.
Both transfer and probing tasks are integrated in SentEval BIBREF18 framework for sentence vector representations. Later, Perone2018 applied SentEval to eleven different encoding methods revealing that there is no consistently well performing method across all tasks. SentEval was further criticized for pitfalls such as comparing different embedding sizes or correlation between tasks BIBREF19, BIBREF20.
shi-etal-2016-string show that NMT encoder is able to capture syntactic information about the source sentence. DBLP:journals/corr/BelinkovDDSG17 examine the ability of NMT to learn morphology through POS and morphological tagging.
Still, very little is known about semantic properties of sentence embeddings. Interestingly, cifka:bojar:meanings:2018 observe that the better self-attention embeddings serve in NMT, the worse they perform in most of SentEval tasks.
zhu-etal-2018-exploring generate automatically sentence variations such as:
(1) Original sentence: A rooster pecked grain.
(2) Synonym Substitution: A cock pecked grain.
(3) Not-Negation: A rooster didn't peck grain.
(4) Quantifier-Negation: There was no rooster pecking grain.
and compare their triplets by examining distances between their embeddings, i.e. distance between (1) and (2) should be smaller than distances between (1) and (3), (2) and (3), similarly, (3) and (4) should be closer together than (1)–(3) or (1)–(4).
In our previous study BIBREF21, we examined the effect of small sentence alternations in sentence vector spaces. We used sentence pairs automatically extracted from datasets for natural language inference BIBREF22, BIBREF23 and observed, that the simple vector difference, familiar from word embeddings, serves reasonably well also in sentence embedding spaces. The examined relations were however very simple: a change of gender, number, addition of an adjective, etc. The structure of the sentence and its wording remained almost identical.
We would like to move to more interesting non-trivial sentence comparison, beyond those in zhu-etal-2018-exploring or BaBo2019, such as change of style of a sentence, the introduction of a small modification that drastically changes the meaning of a sentence or reshuffling of words in a sentence that alters its meaning.
Unfortunately, such a dataset cannot be generated automatically and it is not available to our best knowledge. We try to start filling this gap with COSTRA 1.0.
<<</Background>>>
<<<Annotation>>>
We acquired the data in two rounds of annotation. In the first one, we were looking for original and uncommon sentence change suggestions. In the second one, we collected sentence alternations using ideas from the first round. The first and second rounds of annotation could be broadly called as collecting ideas and collecting data, respectively.
<<<First Round: Collecting Ideas>>>
We manually selected 15 newspaper headlines. Eleven annotators were asked to modify each headline up to 20 times and describe the modification with a short name. They were given an example sentence and several of its possible alternations, see tab:firstroundexamples.
Unfortunately, these examples turned out to be highly influential on the annotators' decisions and they correspond to almost two thirds of all modifications gathered in the first round. Other very common transformations include a change of word order or transformation into an interrogative/imperative sentence.
Other interesting modifications were also proposed, such as a change into a fairy-tale style, excessive use of diminutives/vulgarisms, or dadaism, a swap of roles in the sentence so that the resulting sentence is grammatically correct but nonsensical in our world. Of these suggestions, we selected only the dadaistic swap of roles for the current exploration (see nonsense in Table TABREF7).
In total, we collected 984 sentences with 269 described unique changes. We use them as an inspiration for the second round of annotation.
<<</First Round: Collecting Ideas>>>
<<<Second Round: Collecting Data>>>
<<<Sentence Transformations>>>
We selected 15 modification types to collect COSTRA 1.0. They are presented in the annotation instructions.
We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space.
Several modification types were specifically selected to constitute a thorough test of embeddings. In different meaning, the annotators should create a sentence with some other meaning using the same words as the original sentence. Other transformations which should be difficult for embeddings include minimal change, in which the sentence meaning should be significantly changed using only a very small modification, or nonsense, in which the words of the source sentence should be shuffled so that the result remains grammatically correct but carries no sense.
<<</Sentence Transformations>>>
<<<Seed Data>>>
The source sentences for annotations were selected from Czech data of Global Voices BIBREF24 and OpenSubtitles BIBREF25. We used two sources in order to have different styles of seed sentences, both journalistic and common spoken language. We considered only sentences with more than 5 and less than 15 words and we manually selected 150 of them for further annotation. This step was necessary to remove sentences that are:
too unreal, out of this world, such as:
Jedno fotonový torpédo a je z tebe vesmírná topinka.
“One photon torpedo and you're a space toast.”
photo captions (i.e. incomplete sentences), e.g.:
Zvláštní ekvádorský případ Correa vs. Crudo
“Specific Ecuadorian case Correa vs. Crudo”
too vague, overly dependent on the context:
Běž tam a mluv na ni.
“Go there and speak to her.”
Many of the intended sentence transformations would be impossible to apply to such sentences and annotators' time would be wasted. Even after such filtering, it was still quite possible that a desired sentence modification could not be achieved for a sentence. For such a case, we gave the annotators the option to enter the keyword IMPOSSIBLE instead of the particular (impossible) modification.
This option allowed the annotators to state explicitly that no such transformation is possible. At the same time, most of the transformations are likely to lead to a large number of possible outcomes. As documented in scratching2013, a Czech sentence might have hundreds of thousands of paraphrases. To support some minimal exploration of this possible diversity, most of the sentences were assigned to several annotators.
<<</Seed Data>>>
<<<Spell-Checking>>>
The annotation is a challenging task and the annotators naturally make mistakes. Unfortunately, a single typo can significantly influence the resulting embedding BIBREF26. After collecting all the sentence variations, we applied the statistical spellchecker and grammar checker Korektor BIBREF27 in order to minimize influence of typos to performance of embedding methods. We manually inspected 519 errors identified by Korektor and fixed 129, which were identified correctly.
<<</Spell-Checking>>>
<<</Second Round: Collecting Data>>>
<<</Annotation>>>
<<<Dataset Description>>>
In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics.
The time needed to carry out one piece of annotation (i.e. to provide one seed sentence with all 15 transformations) was on average almost 20 minutes, but some annotators easily needed even half an hour. Out of the 4262 distinct sentences, only 188 were recorded more than once. In other words, the chance of two annotators producing the same output string is quite low. The most repeated transformations are by far past, future and ban. The least repeated is paraphrase, with only a single repetition.
The annotator-overlap table (multiple-annots) documents this in another way. The 293 annotations are split into groups depending on how many annotators saw the same input sentence: 30 annotations were annotated by one person only, 30 annotations by two different persons, etc. The last column shows the number of unique outputs obtained in that group. Across all cases, 96.8% of the produced strings were unique.
In line with the instructions, the annotators used the IMPOSSIBLE option sparingly (95 times, i.e. only 2%). Only 7 annotators used it at all; the remaining 5 annotators were capable of producing all requested transformations. The top three transformations considered unfeasible were different meaning (using the same set of words), past (esp. for sentences already in the past tense) and simple sentence.
<<<First Observations>>>
We embedded COSTRA sentences with LASER BIBREF15, the method that performed very well in revealing linear relations in BaBo2019. Having browsed a number of 2D visualizations (PCA and t-SNE) of the space, we have to conclude that, visually, the LASER space does not seem to exhibit any of the desired topological properties discussed above; see fig:pca for one example.
The lack of semantic relations in the LASER space is also reflected in the vector similarities, summarized in the similarities table. The minimal change operation substantially changed the meaning of the sentence, and yet the embedding of the transformation lies very close to the original sentence (average similarity of 0.930). Tense changes and some form of negation or banning also keep the vectors very similar.
The lowest average similarity was observed for generalization (0.739) and simplification (0.781), which is not a bad sign. However, the fact that paraphrases have a much smaller similarity (0.826) than opposite meaning (0.902) documents that the vector space is lacking in terms of “relatability”.
<<</First Observations>>>
<<</Dataset Description>>>
<<<Conclusion and Future Work>>>
We presented COSTRA 1.0, a small corpus of complex transformations of Czech sentences.
We plan to use this corpus to analyze a wide spectrum of sentence embedding methods to see to what extent the continuous space they induce reflects semantic relations between sentences in our corpus. The very first analysis using LASER embeddings indicates a lack of “meaning relatability”, i.e. the ability to move along a trajectory in the space in order to reach desired sentence transformations. Actually, not even paraphrases are found in close neighbourhoods of embedded sentences. More “semantic” sentence embedding methods are thus to be sought.
The corpus is freely available at the following link:
http://hdl.handle.net/11234/1-3123
Aside from extending the corpus in Czech and adding other language variants, we are also considering wrapping COSTRA 1.0 in an API such as SentEval, so that it is very easy for researchers to evaluate their sentence embeddings in terms of “relatability”.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nAnnotation\nFirst Round: Collecting Ideas\nSecond Round: Collecting Data\nSentence Transformations\nSeed Data\nSpell-Checking\nDataset Description\nFirst Observations\nConclusion and Future Work"
],
"type": "outline"
}
|
1909.00088
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange
<<<Abstract>>>
In this paper, we present a novel method for measurably adjusting the semantics of text while preserving its sentiment and fluency, a task we call semantic text exchange. This is useful for text data augmentation and the semantic correction of text generated by chatbots and virtual assistants. We introduce a pipeline called SMERTI that combines entity replacement, similarity masking, and text infilling. We measure our pipeline's success by its Semantic Text Exchange Score (STES): the ability to preserve the original text's sentiment and fluency while adjusting semantic content. We propose to use masking (replacement) rate threshold as an adjustable parameter to control the amount of semantic change in the text. Our experiments demonstrate that SMERTI can outperform baseline models on Yelp reviews, Amazon reviews, and news headlines.
<<</Abstract>>>
<<<Introduction>>>
There has been significant research on style transfer, with the goal of changing the style of text while preserving its semantic content. The alternative where semantics are adjusted while keeping style intact, which we call semantic text exchange (STE), has not been investigated to the best of our knowledge. Consider the following example, where the replacement entity defines the new semantic context:
Original Text: It is sunny outside! Ugh, that means I must wear sunscreen. I hate being sweaty and sticky all over.
Replacement Entity: weather = rainy
Desired Text: It is rainy outside! Ugh, that means I must bring an umbrella. I hate being wet and having to carry it around.
The weather within the original text is sunny, whereas the actual weather may be rainy. Not only is the word sunny replaced with rainy, but the rest of the text's content is changed while preserving its negative sentiment and fluency. With the rise of natural language processing (NLP) has come an increased demand for massive amounts of text data. Manually collecting and scraping data requires a significant amount of time and effort, and data augmentation techniques for NLP are limited compared to fields such as computer vision. STE can be used for text data augmentation by producing various modifications of a piece of text that differ in semantic content.
Another use of STE is in building emotionally aligned chatbots and virtual assistants. This is useful for reasons such as marketing, overall enjoyment of interaction, and mental health therapy. However, due to limited data with emotional content in specific semantic contexts, the generated text may contain incorrect semantic content. STE can adjust text semantics (e.g. to align with reality or a specific task) while preserving emotions.
One specific example is the development of virtual assistants with adjustable socio-emotional personalities in the effort to construct assistive technologies for persons with cognitive disabilities. Adjusting the emotional delivery of text in subtle ways can have a strong effect on the adoption of the technologies BIBREF0. It is challenging to transfer style this subtly due to lack of datasets on specific topics with consistent emotions. Instead, large datasets of emotionally consistent interactions not confined to specific topics exist. Hence, it is effective to generate text with a particular emotion and then adjust its semantics.
We propose a pipeline called SMERTI (pronounced `smarty') for STE. Combining entity replacement (ER), similarity masking (SM), and text infilling (TI), SMERTI can modify the semantic content of text. We define a metric called the Semantic Text Exchange Score (STES) that evaluates the overall ability of a model to perform STE, and an adjustable parameter masking (replacement) rate threshold (MRT/RRT) that can be used to control the amount of semantic change.
We evaluate on three datasets: Yelp and Amazon reviews BIBREF1, and Kaggle news headlines BIBREF2. We implement three baseline models for comparison: Noun WordNet Semantic Text Exchange Model (NWN-STEM), General WordNet Semantic Text Exchange Model (GWN-STEM), and Word2Vec Semantic Text Exchange Model (W2V-STEM).
We illustrate the STE performance of two SMERTI variations on the datasets, demonstrating outperformance of the baselines and pipeline stability. We also run a human evaluation supporting our results. We analyze the results in detail and investigate relationships between the semantic change, fluency, sentiment, and MRT/RRT. Our major contributions can be summarized as:
We define a new task called semantic text exchange (STE) with increasing importance in NLP applications that modifies text semantics while preserving other aspects such as sentiment.
We propose a pipeline SMERTI capable of multi-word entity replacement and text infilling, and demonstrate its outperformance of baselines.
We define an evaluation metric for overall performance on semantic text exchange called the Semantic Text Exchange Score (STES).
<<</Introduction>>>
<<<Related Work>>>
<<<Word and Sentence-level Embeddings>>>
Word2Vec BIBREF3, BIBREF4 allows for analogy representation through vector arithmetic. We implement a baseline (W2V-STEM) using this technique. The Universal Sentence Encoder (USE) BIBREF5 encodes sentences and is trained on a variety of web sources and the Stanford Natural Language Inference corpus BIBREF6. Flair embeddings BIBREF7 are based on architectures such as BERT BIBREF8. We use USE for SMERTI as it is designed for transfer learning and shows higher performance on textual similarity tasks compared to other models BIBREF9.
<<</Word and Sentence-level Embeddings>>>
<<<Text Infilling>>>
Text infilling is the task of filling in missing parts of sentences called masks. MaskGAN BIBREF10 is restricted to a single word per mask token, while SMERTI is capable of variable length infilling for more flexible output. BIBREF11 uses a transformer-based architecture. They fill in random masks, while SMERTI fills in masks guided by semantic similarity, resulting in more natural infilling and fulfillment of the STE task.
<<</Text Infilling>>>
<<<Style and Sentiment Transfer>>>
Notable works in style/sentiment transfer include BIBREF12, BIBREF13, BIBREF14, BIBREF15. They attempt to learn latent representations of various text aspects such as its context and attributes, or separate style from content and encode them into hidden representations. They then use an RNN decoder to generate a new sentence given a targeted sentiment attribute.
<<</Style and Sentiment Transfer>>>
<<<Review Generation>>>
BIBREF16 generates fake reviews from scratch using language models. BIBREF17, BIBREF18, BIBREF19 generate reviews from scratch given auxiliary information (e.g. the item category and star rating). BIBREF20 generates reviews using RNNs with two components: generation from scratch and review customization (Algorithm 2 in BIBREF20). They define review customization as modifying the generated review to fit a new topic or context, such as from a Japanese restaurant to an Italian one. They condition on a keyword identifying the desired context, and replace similar nouns with others using WordNet BIBREF21. They require a “reference dataset” (required to be “on topic”; easy enough for restaurant reviews, but less so for arbitrary conversational agents). As noted by BIBREF19, the method of BIBREF20 may also replace words independently of context. We implement their review customization algorithm (NWN-STEM) and a modified version (GWN-STEM) as baseline models.
<<</Review Generation>>>
<<</Related Work>>>
<<<SMERTI>>>
<<<Overview>>>
The task is to transform a corpus $C$ of lines of text $S_i$ and associated replacement entities $RE_i$: $C = \lbrace (S_1,RE_1),(S_2,RE_2),\ldots , (S_n, RE_n)\rbrace $, into a modified corpus $\hat{C} = \lbrace \hat{S}_1,\hat{S}_2,\ldots ,\hat{S}_n\rbrace $, where each $\hat{S}_i$ is the original text line $S_i$ with $RE_i$ substituted in and its overall semantics adjusted. SMERTI consists of the following modules, shown in Figure FIGREF15:
Entity Replacement Module (ERM): Identify which word(s) within the original text are best replaced with the $RE$, which we call the Original Entity ($OE$). We replace $OE$ in $S$ with $RE$. We call this modified text $S^{\prime }$.
Similarity Masking Module (SMM): Identify words/phrases in $S^{\prime }$ similar to $OE$ and replace them with a [mask]. Group adjacent [mask]s into a single one so we can fill a variable length of text into each. We call this masked text $S^{\prime \prime }$.
Text Infilling Module (TIM): Fill in [mask] tokens with text that better suits the $RE$. This will modify semantics in the rest of the text. This final output text is called $\hat{S}$.
<<</Overview>>>
<<<Entity Replacement Module (ERM)>>>
For entity replacement, we use a combination of the Universal Sentence Encoder BIBREF5 and Stanford Parser BIBREF22.
<<<Stanford Parser>>>
The Stanford Parser is a constituency parser that determines the grammatical structure of sentences, including phrases and part-of-speech (POS) labelling. By feeding our $RE$ through the parser, we are able to determine its parse-tree. Iterating through the parse-tree and its sub-trees, we can obtain a list of constituent tags for the $RE$. We then feed our input text $S$ through the parser, and through a similar process, we can obtain a list of leaves (where leaves under a single label are concatenated) that are equal or similar to any of the $RE$ constituent tags. This generates a list of entities having the same (or similar) grammatical structure as the $RE$, and are likely candidates for the $OE$. We then feed these entities along with the $RE$ into the Universal Sentence Encoder (USE).
<<</Stanford Parser>>>
<<<Universal Sentence Encoder (USE)>>>
The USE is a sentence-level embedding model that comes with a deep averaging network (DAN) and transformer model BIBREF5. We choose the transformer model as these embeddings take context into account, and the exact same word/phrase will have a different embedding depending on its context and surrounding words.
We compute the semantic similarity between two embeddings $u$ and $v$: $sim(u,v)$, using the angular (cosine) distance, defined as: $\cos (\theta _{u,v}) = (u\cdot v)/(||u|| ||v||)$, such that $sim(u,v) = 1-\frac{1}{\pi }arccos(\cos (\theta _{u,v}))$. Results are in $[0,1]$, with higher values representing greater similarity.
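A minimal NumPy sketch of this angular similarity (not the authors' code; the random vectors stand in for USE embeddings):

```python
# Minimal sketch: angular similarity between two embeddings, mapped to [0, 1].
import numpy as np

def angular_similarity(u, v):
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    cos = np.clip(cos, -1.0, 1.0)                    # guard against floating-point drift
    return 1.0 - np.arccos(cos) / np.pi

u, v = np.random.rand(512), np.random.rand(512)      # stand-ins for USE embeddings
print(angular_similarity(u, v))
```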
Using USE and the above equation, we can identify words/phrases within the input text $S$ which are most similar to $RE$. To assist with this, we use the Stanford Parser as described above to obtain a list of candidate entities. In the rare case that this list is empty, we feed in each word of $S$ into USE, and identify which word is the most similar to $RE$. We then replace the most similar entity or word ($OE$) with the $RE$ and generate $S^{\prime }$.
An example of this entity replacement process is in Figure FIGREF18. Two parse-trees are shown: for $RE$ (a) and $S$ (b) and (c). Figure FIGREF18(d) is a semantic similarity heat-map generated from the USE embeddings of the candidate $OE$s and $RE$, where values are similarity scores in the range $[0,1]$.
As seen in Figure FIGREF18(d), we calculate semantic similarities between $RE$ and entities within $S$ which have noun constituency tags. Looking at the row for our $RE$ restaurant, the most similar entity (excluding itself) is hotel. We can then generate:
$S^{\prime }$ = i love this restaurant ! the beds are comfortable and the service is great !
<<</Universal Sentence Encoder (USE)>>>
<<</Entity Replacement Module (ERM)>>>
<<<Similarity Masking Module (SMM)>>>
Next, we mask words similar to $OE$ to generate $S^{\prime \prime }$ using USE. We look at semantic similarities between every word in $S$ and $OE$, along with semantic similarities between $OE$ and the candidate entities determined in the previous ERM step to broaden the range of phrases our module can mask. We ignore $RE$, $OE$, and any entities or phrases containing $OE$ (for example, `this hotel').
After determining words similar to the $OE$ (discussed below), we replace each of them with a [mask] token. Next, we replace [mask] tokens adjacent to each other with a single [mask].
We set a base similarity threshold (ST) that selects a subset of words to mask. We compare the actual fraction of masked words to the masking rate threshold (MRT), as defined by the user, and increase ST in intervals of $0.05$ until the actual masking rate falls below the MRT. Some sample masked outputs ($S^{\prime \prime }$) using various MRT-ST combinations for the previous example are shown in Table TABREF21 (more examples in Appendix A).
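A minimal sketch of this thresholding loop is shown below; the `similarity_to_OE` helper (a word-to-$OE$ similarity, e.g. computed from USE embeddings) is a hypothetical stand-in for the module described above:

```python
# Minimal sketch: raise the similarity threshold (ST) in 0.05 steps until the fraction
# of masked words drops to the masking rate threshold (MRT), then merge adjacent masks.
def mask_by_threshold(words, similarity_to_OE, mrt=0.4, st=0.4, step=0.05):
    while True:
        masked = ["[mask]" if similarity_to_OE(w) >= st else w for w in words]
        rate = sum(tok == "[mask]" for tok in masked) / len(words)
        if rate <= mrt or st >= 1.0:
            break
        st += step
    collapsed = []
    for tok in masked:                        # group adjacent [mask] tokens into one
        if tok == "[mask]" and collapsed and collapsed[-1] == "[mask]":
            continue
        collapsed.append(tok)
    return collapsed, st
```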
The MRT is similar to the temperature parameter used to control the “novelty” of generated text in works such as BIBREF20. A high MRT means the user wants to generate text very semantically dissimilar to the original, and may be desired in cases such as creating a lively chatbot or correcting text that is heavily incorrect semantically. A low MRT means the user wants to generate text semantically similar to the original, and may be desired in cases such as text recovery, grammar correction, or correcting a minor semantic error in text. By varying the MRT, various pieces of text that differ semantically in subtle ways can be generated, assisting greatly with text data augmentation. The MRT also affects sentiment and fluency, as we show in Section SECREF59.
<<</Similarity Masking Module (SMM)>>>
<<<Text Infilling Module (TIM)>>>
We use two seq2seq models for our TIM: an RNN (recurrent neural network) model BIBREF23 (called SMERTI-RNN), and a transformer model (called SMERTI-Transformer).
<<<Bidirectional RNN with Attention>>>
We use a bidirectional variant of the GRU BIBREF24, and hence two RNNs for the encoder: one reads the input sequence in standard sequential order, and the other is fed this sequence in reverse. The outputs are summed at each time step, giving us the ability to encode information from both past and future context.
The decoder generates the output in a sequential token-by-token manner. To combat information loss, we implement the attention mechanism BIBREF25. We use a Luong attention layer BIBREF26 which uses global attention, where all the encoder's hidden states are considered, and use the decoder's current time-step hidden state to calculate attention weights. We use the dot score function for attention, where $h_t$ is the current target decoder state and $\bar{h}_s$ is all encoder states: $score(h_t,\bar{h}_s)=h_t^T\bar{h}_s$.
<<</Bidirectional RNN with Attention>>>
<<<Transformer>>>
Our second model makes use of the transformer architecture, and our implementation replicates BIBREF27. We use an encoder-decoder structure with a multi-head self-attention token decoder to condition on information from both past and future context. It maps a query and a set of key-value pairs to an output. The queries and keys are of dimension $d_k$, and values of dimension $d_v$. To compute the attention, we pack a set of queries, keys, and values into matrices $Q$, $K$, and $V$, respectively. The matrix of outputs is computed as: $\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$.
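A minimal PyTorch sketch of the scaled dot-product attention above (single head, no masking):

```python
# Minimal sketch: scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V.
import torch

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    weights = torch.softmax(scores, dim=-1)
    return weights @ V                          # (n_q, d_v)
```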
Multi-head attention allows the model to jointly attend to information from different positions. The decoder can make use of both local and global semantic information while filling in each [mask].
<<</Transformer>>>
<<</Text Infilling Module (TIM)>>>
<<</SMERTI>>>
<<<Experiment>>>
<<<Datasets>>>
We train our two TIMs on the three datasets. The Amazon dataset BIBREF1 contains over 83 million user reviews on products, with duplicate reviews removed. The Yelp dataset includes over six million user reviews on businesses. The news headlines dataset from Kaggle contains approximately $200,000$ news headlines from 2012 to 2018 obtained from HuffPost BIBREF2.
We filter the text to obtain reviews and headlines which are English, do not contain hyperlinks or other obvious noise, and are less than 20 words long. We found that many texts longer than twenty words ramble on and are too verbose for our purposes. Rather than filtering by individual sentences, we keep each text in its entirety so that SMERTI can learn to generate multiple sentences at once. We preprocess the text by lowercasing and removing rare/duplicate punctuation and spaces.
For Amazon and Yelp, we treat reviews greater than three stars as containing positive sentiment, equal to three stars as neutral, and less than three stars as negative. For each training and testing set, we include an equal number of randomly selected positive and negative reviews, and half as many neutral reviews. This is because neutral reviews occupy only one out of five stars, compared to positive and negative which occupy two each. Our dataset statistics can be found in Appendix B.
<<</Datasets>>>
<<<Experiment Details>>>
To set up our training and testing data for text infilling, we mask the text. We use a tiered masking approach: for each dataset, we randomly mask 15% of the words in one-third of the lines, 30% of the words in another one-third, and 45% in the remaining one-third. These masked texts serve as the inputs, while the original texts serve as the ground-truth. This allows our TIM models to learn relationships between masked words and relationships between masked and unmasked words.
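A simplified sketch of how such a tiered masking setup can be constructed (the [mask] token string and the word-level split are our assumptions; the paper's preprocessing may differ in details):

    import random

    def mask_line(line, mask_rate, mask_token="[mask]"):
        # Randomly replace mask_rate of the words in a line with the mask token.
        words = line.split()
        n_mask = max(1, round(mask_rate * len(words)))
        for i in random.sample(range(len(words)), n_mask):
            words[i] = mask_token
        return " ".join(words)

    def build_tiered_inputs(lines):
        # One-third of lines masked at 15%, one-third at 30%, one-third at 45%.
        random.shuffle(lines)
        third = len(lines) // 3
        rates = [0.15] * third + [0.30] * third + [0.45] * (len(lines) - 2 * third)
        return [(mask_line(l, r), l) for l, r in zip(lines, rates)]  # (input, ground-truth) pairs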
The bidirectional RNN decoder fills in blanks one by one, with the objective of minimizing the cross entropy loss between its output and the ground-truth. We use a hidden size of 500, two layers for the encoder and decoder, teacher-forcing ratio of 1.0, learning rate of 0.0001, dropout of 0.1, batch size of 64, and train for up to 40 epochs.
For the transformer, we use scaled dot-product attention and the same hyperparameters as BIBREF27. We use the Adam optimizer BIBREF28 with $\beta _1 = 0.9, \beta _2 = 0.98$, and $\epsilon = 10^{-9}$. As in BIBREF27, we increase the $learning\_rate$ linearly for the first $warmup\_steps$ training steps, and then decrease the $learning\_rate$ proportionally to the inverse square root of the step number. We set $factor=1$ and use $warmup\_steps = 2000$. We use a batch size of 4096, and we train for up to 40 epochs.
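For reference, the learning-rate schedule described above can be computed as follows (a sketch; $d_{model}=512$ is assumed from the default transformer configuration):

    def transformer_lr(step, d_model=512, factor=1, warmup_steps=2000):
        # Linear warmup for the first warmup_steps steps, then decay proportional
        # to the inverse square root of the step number.
        step = max(step, 1)
        return factor * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

The optimizer's learning rate is then set to transformer_lr(step) after every parameter update.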
<<</Experiment Details>>>
<<<Baseline Models>>>
We implement three models to benchmark against. First is NWN-STEM (Algorithm 2 from BIBREF20). We use the training sets as the “reference review sets” to extract similar nouns to the $RE$ (using MINsim = 0.1). We then replace nouns in the text similar to the $RE$ with nouns extracted from the associated reference review set.
Secondly, we modify NWN-STEM to work for verbs and adjectives, and call this GWN-STEM. From the reference review sets, we extract similar nouns, verbs, and adjectives to the $RE$ (using MINsim = 0.1), where the $RE$ is now not restricted to being a noun. We replace nouns, verbs, and adjectives in the text similar to the $RE$ with those extracted from the associated reference review set.
Lastly, we implement W2V-STEM using Gensim BIBREF29. We train uni-gram Word2Vec models for single word $RE$s, and four-gram models for phrases. Models are trained on the training sets. We use cosine similarity to determine the most similar word/phrase in the input text to $RE$, which is the replaced $OE$. For all other words/phrases, we calculate $w_{i}^{\prime } = w_{i} - w_{OE} + w_{RE}$, where $w_{i}$ is the original word/phrase's embedding vector, $w_{OE}$ is the $OE$'s, $w_{RE}$ is the $RE$'s, and $w_{i}^{\prime }$ is the resulting embedding vector. The replacement word/phrase is $w_{i}^{\prime }$'s nearest neighbour. We use similarity thresholds to adjust replacement rates (RR) and produce text under various replacement rate thresholds (RRT).
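A rough sketch of the W2V-STEM replacement step for single-word $RE$s (gensim 4.x API; the toy corpus is purely illustrative, and the similarity thresholds used to control RR are omitted):

    from gensim.models import Word2Vec

    # Toy corpus for illustration only; the paper trains on the full training sets.
    corpus = [["the", "pizza", "was", "great"], ["the", "burger", "was", "cold"],
              ["great", "service", "and", "great", "pizza"]]
    wv = Word2Vec(sentences=corpus, vector_size=50, min_count=1, seed=0).wv

    def w2v_stem_replace(words, OE, RE):
        out = []
        for w in words:
            if w == OE:
                out.append(RE)                        # the OE itself is replaced by the RE
            elif w in wv and OE in wv and RE in wv:
                shifted = wv[w] - wv[OE] + wv[RE]     # w_i' = w_i - w_OE + w_RE
                out.append(wv.similar_by_vector(shifted, topn=1)[0][0])
            else:
                out.append(w)
        return out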
<<</Baseline Models>>>
<<</Experiment>>>
<<<Evaluation>>>
<<<Evaluation Setup>>>
We manually select 10 nouns, 10 verbs, 10 adjectives, and 5 phrases from the top 10% most frequent words/phrases in each test set as our evaluation $RE$s. We filter the verbs and adjectives through a list of sentiment words BIBREF30 to ensure we do not choose $RE$s that would obviously significantly alter the text's sentiment.
For each evaluation $RE$, we choose one-hundred lines from the corresponding test set that do not already contain $RE$. We choose lines with at least five words, as many with fewer carry little semantic meaning (e.g. `Great!', `It is okay'). For Amazon and Yelp, we choose 50 positive and 50 negative lines per $RE$. We repeat this process three times, resulting in three sets of 1000 lines per dataset per POS (excluding phrases), and three sets of 500 lines per dataset for phrases. Our final results are averaged metrics over these three sets.
For SMERTI-Transformer, SMERTI-RNN, and W2V-STEM, we generate four outputs per text for MRT/RRT of 20%, 40%, 60%, and 80%, which represent upper-bounds on the percentage of the input that can be masked and/or replaced. Note that NWN-STEM and GWN-STEM can only evaluate on limited POS and their maximum replacement rates are limited. We select MINsim values of 0.075 and 0 for nouns and 0.1 and 0 for verbs, as these result in replacement rates approximately equal to the actual MR/RR of the other models' outputs for 20% and 40% MRT/RRT, respectively.
<<</Evaluation Setup>>>
<<<Key Evaluation Metrics>>>
Fluency (SLOR) We use syntactic log-odds ratio (SLOR) BIBREF31 for sentence level fluency and modify from their word-level formula to character-level ($SLOR_{c}$). We use Flair perplexity values from a language model trained on the One Billion Words corpus BIBREF32: $SLOR_{c}(S) = \frac{1}{|S|}\left(\ln (p_M(S)) - \sum _{w \in S}\ln (p_M(w))\right) = -\ln (PPL_S) + \frac{1}{|S|}\sum _{w \in S}|w|\ln (PPL_w),$
where $|S|$ and $|w|$ are the character lengths of the input text $S$ and the word $w$, respectively, $p_M(S)$ and $p_M(w)$ are the probabilities of $S$ and $w$ under the language model $M$, respectively, and $PPL_S$ and $PPL_w$ are the character-level perplexities of $S$ and $w$, respectively. SLOR (from hereon we refer to character-level SLOR as simply SLOR) measures aspects of text fluency such as grammaticality. Higher values represent higher fluency.
We rescale resulting SLOR values to the interval [0,1] by first fitting and normalizing a Gaussian distribution. We then truncate normalized data points outside [-3,3], which shifts approximately 0.69% of total data. Finally, we divide each data point by six and add 0.5 to each result.
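A sketch of this rescaling step (assuming the raw SLOR values are in a NumPy array, and reading “truncate” as clipping to $[-3,3]$):

    import numpy as np

    def rescale_slor(slor_values):
        # Fit/normalize a Gaussian, truncate to [-3, 3], then map to [0, 1].
        z = (slor_values - slor_values.mean()) / slor_values.std()
        z = np.clip(z, -3.0, 3.0)   # affects roughly 0.69% of the data points
        return z / 6.0 + 0.5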
Sentiment Preservation Accuracy (SPA) is defined as the percentage of outputs that carry the same sentiment as the input. We use VADER BIBREF33 to evaluate sentiment as positive, negative, or neutral. It handles typos, emojis, and other aspects of online text. Content Similarity Score (CSS) ranges from 0 to 1 and indicates the semantic similarity between generated text and the $RE$. A value closer to 1 indicates stronger semantic exchange, as the output is closer in semantic content to the $RE$. We also use the USE for this due to its design and strong performance as previously mentioned.
<<</Key Evaluation Metrics>>>
<<<Semantic Text Exchange Score (STES)>>>
We propose a single score to evaluate the overall performance of a model on STE that combines the key evaluation metrics. It uses the harmonic mean, similar to the F1 score (or F-score) BIBREF34, BIBREF35, and we call it the Semantic Text Exchange Score (STES): $STES = \frac{3ABC}{AB + AC + BC},$
where $A$ is SPA, $B$ is SLOR, and $C$ is CSS. STES ranges between 0 and 1, with scores closer to 1 representing higher overall performance. Like the F1 score, STES penalizes models which perform very poorly in one or more metrics, and favors balanced models achieving strong results in all three.
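In code, this combination is a plain three-way harmonic mean (a sketch; inputs are assumed to already lie in $[0,1]$):

    def stes(spa, slor, css):
        # Harmonic mean of SPA (A), SLOR (B), and CSS (C).
        if min(spa, slor, css) == 0:
            return 0.0
        return 3.0 / (1.0 / spa + 1.0 / slor + 1.0 / css)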
<<</Semantic Text Exchange Score (STES)>>>
<<<Automatic Evaluation Results>>>
Table TABREF38 shows overall average results by model. Table TABREF41 shows outputs for a Yelp example.
As observed from Table TABREF41 (see also Appendix F), SMERTI is able to generate high quality output text similar to the $RE$ while flowing better than other models' outputs. It can replace entire phrases and sentences due to its variable length infilling. Note that for nouns, the outputs from GWN-STEM and NWN-STEM are equivalent.
<<</Automatic Evaluation Results>>>
<<<Human Evaluation Setup>>>
We conduct a human evaluation with eight participants (six males and two females), who are affiliated project researchers aged 20-39 at the University of Waterloo. We randomly choose one evaluation line for a randomly selected word or phrase for each POS per dataset. The input text and each model's output (for 40% MRT/RRT - chosen as a good middle ground) for each line are presented to participants, resulting in a total of 54 pieces of text, and rated on the following criteria from 1-5:
RE Match: “How related is the entire text to the concept of [X]", where [X] is a word or phrase (1 - not at all related, 3 - somewhat related, 5 - very related). Note here that [X] is a given $RE$.
Fluency: “Does the text make sense and flow well?" (1 - not at all, 3 - somewhat, 5 - very)
Sentiment: “How do you think the author of the text was feeling?" (1 - very negative, 3 - neutral, 5 - very positive)
Each participant evaluates every piece of text. They are presented with a single piece of text at a time, with the order of models, POS, and datasets completely randomized.
<<</Human Evaluation Setup>>>
<<<Human Evaluation Results>>>
Average human evaluation scores are displayed in Table TABREF50. Sentiment Preservation (between 0 and 1) is calculated by comparing the average Sentiment rating for each model's output text to the Sentiment rating of the input text, and if both are less than 2.5 (negative), between 2.5 and 3.5 inclusive (neutral), or greater than 3.5 (positive), this is counted as a valid case of Sentiment Preservation. We repeat this for every evaluation line to calculate the final values per model. Harmonic means of all three metrics (using rescaled 0-1 values of RE Match and Fluency) are also displayed.
<<</Human Evaluation Results>>>
<<</Evaluation>>>
<<<Analysis>>>
<<<Performance by Model>>>
As seen in Table TABREF38, both SMERTI variations achieve higher STES and outperform the other models overall, with the WordNet models performing the worst. SMERTI excels especially on fluency and content similarity. The transformer variation achieves slightly higher SLOR, while the RNN variation achieves slightly higher CSS. The WordNet models perform strongest in sentiment preservation (SPA), likely because they modify little of the text and only verbs and nouns. They achieve by far the lowest CSS, likely in part due to this limited text replacement. They also do not account for context, and many words (e.g. proper nouns) do not exist in WordNet. Overall, the WordNet models are not very effective at STE.
W2V-STEM achieves the lowest SLOR, especially for higher RRT, as supported by the example in Table TABREF41 (see also Appendix F). W2V-STEM and the WordNet models output grammatically incorrect text that flows poorly. In many cases, words are repeated multiple times. We analyze the average Type Token Ratio (TTR) values of each model's outputs, i.e. the number of unique words divided by the total number of words. As shown in Table TABREF52, the SMERTI variations achieve the highest TTR, while W2V-STEM and NWN-STEM the lowest.
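TTR itself is straightforward to compute; a short sketch:

    def type_token_ratio(text):
        # Number of unique words divided by total number of words.
        tokens = text.split()
        return len(set(tokens)) / len(tokens) if tokens else 0.0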
Note that while W2V-STEM achieves lower CSS than SMERTI, it performs comparably in this aspect. This is likely due to its vector arithmetic operations algorithm, which replaces each word with one more similar to the RE. This is also supported by the lower TTR, as W2V-STEM frequently outputs the same words multiple times.
<<</Performance by Model>>>
<<<Performance By Model - Human Results>>>
As seen in Table TABREF50, the SMERTI variations outperform all baseline models overall, particularly in RE Match. SMERTI-Transformer performs the best, with SMERTI-RNN second. The WordNet models achieve high Sentiment Preservation, but much lower on RE Match. W2V-STEM achieves comparably high RE Match, but lowest Fluency.
These results correspond well with our automatic evaluation results in Table TABREF38. We look at the Pearson correlation values between RE Match, Fluency, and Sentiment Preservation with CSS, SLOR, and SPA, respectively. These are 0.9952, 0.9327, and 0.8768, respectively, demonstrating that our automatic metrics are highly effective and correspond well with human ratings.
<<</Performance By Model - Human Results>>>
<<<SMERTI's Performance By POS>>>
As seen from Table TABREF55, SMERTI's SPA values are highest for nouns, likely because they typically carry little sentiment, and lowest for adjectives, likely because they typically carry the most.
SLOR is lowest for adjectives and highest for phrases and nouns. Adjectives typically carry less semantic meaning and SMERTI likely has more trouble figuring out how best to infill the text. In contrast, nouns typically carry more, and phrases the most (since they consist of multiple words).
SMERTI's CSS is highest for phrases then nouns, likely due to phrases and nouns carrying more semantic meaning, making it easier to generate semantically similar text. Both SMERTI's and the input text's CSS are lowest for adjectives, likely because they carry little semantic meaning.
Overall, SMERTI appears to be more effective on nouns and phrases than verbs and adjectives.
<<</SMERTI's Performance By POS>>>
<<<SMERTI's Performance By Dataset>>>
As seen in Table TABREF58, SMERTI's SPA is lowest for news headlines. Amazon and Yelp reviews naturally carry stronger sentiment, likely making it easier to generate text with similar sentiment.
Both SMERTI's and the input text's SLOR appear to be lower for Yelp reviews. This may be due to many reasons, such as more typos and emojis within the original reviews, and so forth.
SMERTI's CSS values are slightly higher for news headlines. This may be due to them typically being shorter and carrying more semantic meaning as they are designed to be attention grabbers.
Overall, it seems that using datasets which inherently carry more sentiment will lead to better sentiment preservation. Further, the quality of the dataset's original text, unsurprisingly, influences the ability of SMERTI to generate fluent text.
<<</SMERTI's Performance By Dataset>>>
<<<SMERTI's Performance By MRT/RRT>>>
From Table TABREF60, it can be seen that as MRT/RRT increases, SMERTI's SPA and SLOR decrease while CSS increases. These relationships are very strong as supported by the Pearson correlation values of -0.9972, -0.9183, and 0.9078, respectively. When SMERTI can alter more text, it has the opportunity to replace more words related to sentiment while producing text with greater semantic similarity to the $RE$.
Further, SMERTI generates more of the text itself, becoming less similar to the human-written input, resulting in lower fluency. To further demonstrate this, we look at average SMERTI BLEU BIBREF36 scores against MRT/RRT, shown in Table TABREF60. BLEU generally indicates how close two pieces of text are in content and structure, with higher values indicating greater similarity. We report our final BLEU scores as the average scores of 1 to 4-grams. As expected, BLEU decreases as MRT/RRT increases, and this relationship is very strong as supported by the Pearson correlation value of -0.9960.
It is clear that MRT/RRT represents a trade-off between CSS against SPA and SLOR. It is thus an adjustable parameter that can be used to control the generated text, and balance semantic exchange against fluency and sentiment preservation.
<<</SMERTI's Performance By MRT/RRT>>>
<<</Analysis>>>
<<<Conclusion and Future Work>>>
We introduced the task of semantic text exchange (STE), demonstrated that our pipeline SMERTI performs well on STE, and proposed an STES metric for evaluating overall STE performance. SMERTI outperformed other models and was the most balanced overall. We also showed a trade-off between semantic exchange against fluency and sentiment preservation, which can be controlled by the masking (replacement) rate threshold.
Potential directions for future work include adding specific methods to control sentiment, and fine-tuning SMERTI for preservation of persona or personality. Experimenting with other text infilling models (e.g. fine-tuning BERT BIBREF8) is also an area of exploration. Lastly, our human evaluation is limited in size and a larger and more diverse participant pool is needed.
We conclude by addressing potential ethical misuses of STE, including assisting in the generation of spam and fake-reviews/news. These risks come with any intelligent chatbot work, but we feel that the benefits, including usage in the detection of misuse such as fake-news, greatly outweigh the risks and help progress NLP and AI research.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nWord and Sentence-level Embeddings\nText Infilling\nStyle and Sentiment Transfer\nReview Generation\nSMERTI\nOverview\nEntity Replacement Module (ERM)\nStanford Parser\nUniversal Sentence Encoder (USE)\nSimilarity Masking Module (SMM)\nText Infilling Module (TIM)\nBidirectional RNN with Attention\nTransformer\nExperiment\nDatasets\nExperiment Details\nBaseline Models\nEvaluation\nEvaluation Setup\nKey Evaluation Metrics\nSemantic Text Exchange Score (STES)\nAutomatic Evaluation Results\nHuman Evaluation Setup\nHuman Evaluation Results\nAnalysis\nPerformance by Model\nPerformance By Model - Human Results\nSMERTI's Performance By POS\nSMERTI's Performance By Dataset\nSMERTI's Performance By MRT/RRT\nConclusion and Future Work"
],
"type": "outline"
}
|
1911.03385
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Low-Level Linguistic Controls for Style Transfer and Content Preservation
<<<Abstract>>>
Despite the success of style transfer in image processing, it has seen limited progress in natural language generation. Part of the problem is that content is not as easily decoupled from style in the text domain. Curiously, in the field of stylometry, content does not figure prominently in practical methods of discriminating stylistic elements, such as authorship and genre. Rather, syntax and function words are the most salient features. Drawing on this work, we model style as a suite of low-level linguistic controls, such as frequency of pronouns, prepositions, and subordinate clause constructions. We train a neural encoder-decoder model to reconstruct reference sentences given only content words and the setting of the controls. We perform style transfer by keeping the content words fixed while adjusting the controls to be indicative of another style. In experiments, we show that the model reliably responds to the linguistic controls and perform both automatic and manual evaluations on style transfer. We find we can fool a style classifier 84% of the time, and that our model produces highly diverse and stylistically distinctive outputs. This work introduces a formal, extendable model of style that can add control to any neural text generation system.
<<</Abstract>>>
<<<Introduction>>>
All text has style, whether it be formal or informal, polite or aggressive, colloquial, persuasive, or even robotic. Despite the success of style transfer in image processing BIBREF0, BIBREF1, there has been limited progress in the text domain, where disentangling style from content is particularly difficult.
To date, most work in style transfer relies on the availability of meta-data, such as sentiment, authorship, or formality. While meta-data can provide insight into the style of a text, it often conflates style with content, limiting the ability to perform style transfer while preserving content. Generalizing style transfer requires separating style from the meaning of the text itself. The study of literary style can guide us. For example, in the digital humanities and its subfield of stylometry, content doesn't figure prominently in practical methods of discriminating authorship and genres, which can be thought of as style at the level of the individual and population, respectively. Rather, syntactic and functional constructions are the most salient features.
In this work, we turn to literary style as a test-bed for style transfer, and build on work from literature scholars using computational techniques for analysis. In particular we draw on stylometry: the use of surface level features, often counts of function words, to discriminate between literary styles. Stylometry first saw success in attributing authorship to the disputed Federalist Papers BIBREF2, but has recently been used by scholars to study things such as the birth of genres BIBREF3 and the change of author styles over time BIBREF4. The use of function words is likely not the way writers intend to express style, but they appear to be downstream realizations of higher-level stylistic decisions.
We hypothesize that surface-level linguistic features, such as counts of personal pronouns, prepositions, and punctuation, are an excellent definition of literary style, as borne out by their use in the digital humanities, and our own style classification experiments. We propose a controllable neural encoder-decoder model in which these features are modelled explicitly as decoder feature embeddings. In training, the model learns to reconstruct a text using only the content words and the linguistic feature embeddings. We can then transfer arbitrary content words to a new style without parallel data by setting the low-level style feature embeddings to be indicative of the target style.
This paper makes the following contributions:
A formal model of style as a suite of controllable, low-level linguistic features that are independent of content.
An automatic evaluation showing that our model fools a style classifier 84% of the time.
A human evaluation with English literature experts, including recommendations for dealing with the entanglement of content with style.
<<</Introduction>>>
<<<Related Work>>>
<<<Style Transfer with Parallel Data>>>
Following in the footsteps of machine translation, style transfer in text has seen success by using parallel data. BIBREF5 use modern translations of Shakespeare plays to build a modern-to-Shakespearean model. BIBREF6 compile parallel data for formal and informal sentences, allowing them to successfully use various machine translation techniques. While parallel data may work for very specific styles, the difficulty of finding parallel texts dramatically limits this approach.
<<</Style Transfer with Parallel Data>>>
<<<Style Transfer without Parallel Data>>>
There has been a decent amount of work on this approach in the past few years BIBREF7, BIBREF8, mostly focusing on variations of an encoder-decoder framework in which style is modeled as a monolithic style embedding. The main obstacle is disentangling style from content, which remains a challenging problem.
Perhaps the most successful is BIBREF9, who use a de-noising auto encoder and back translation to learn style without parallel data. BIBREF10 outline the benefits of automatically extracting style, and suggest there is a formal weakness of using linguistic heuristics. In contrast, we believe that monolithic style embeddings don't capture the existing knowledge we have about style, and will struggle to disentangle content.
<<</Style Transfer without Parallel Data>>>
<<<Controlling Linguistic Features>>>
Several papers have worked on controlling style when generating sentences from restaurant meaning representations BIBREF11, BIBREF12. In each of these cases, the diversity in outputs is quite small given the constraints of the meaning representation, style is often constrained to interjections (like “yeah”), and there is no original style from which to transfer.
BIBREF13 investigate using stylistic parameters and content parameters to control text generation using a movie review dataset. Their stylistic parameters are created using word-level heuristics and they are successful in controlling these parameters in the outputs. Their success bodes well for our related approach in a style transfer setting, in which the content (not merely content parameters) is held fixed.
<<</Controlling Linguistic Features>>>
<<<Stylometry and the Digital Humanities>>>
Style, in literary research, is anything but a stable concept, but it nonetheless has a long tradition of study in the digital humanities. In a remarkably early quantitative study of literature, BIBREF14 charts sentence-level stylistic attributes specific to a number of novelists. Half a century later, BIBREF15 builds on earlier work in information theory by BIBREF16, and defines a literary text as consisting of two “materials": “the vocabulary, and some structural properties, the style, of its author."
Beginning with BIBREF2, statistical approaches to style, or stylometry, join the already-heated debates over the authorship of literary works. A notable example of this is the “Delta” measure, which uses z-scores of function word frequencies BIBREF17. BIBREF18 find that Shakespeare added some material to a later edition of Thomas Kyd's The Spanish Tragedy, and that Christopher Marlowe collaborated with Shakespeare on Henry VI.
<<</Stylometry and the Digital Humanities>>>
<<</Related Work>>>
<<<Models>>>
<<<Preliminary Classification Experiments>>>
The stylometric research cited above suggests that the most frequently used words, e.g. function words, are most discriminating of authorship and literary style. We investigate these claims using three corpora that have distinctive styles in the literary community: gothic novels, philosophy books, and pulp science fiction, hereafter sci-fi.
We retrieve gothic novels and philosophy books from Project Gutenberg and pulp sci-fi from Internet Archive's Pulp Magazine Archive. We partition this corpus into train, validation, and test sets the sizes of which can be found in Table TABREF12.
In order to validate the above claims, we train five different classifiers to predict the literary style of sentences from our corpus. Each classifier has gradually more content words replaced with part-of-speech (POS) tag placeholder tokens. The All model is trained on sentences with all proper nouns replaced by `PROPN'. The models Ablated N, Ablated NV, and Ablated NVA replace nouns, nouns & verbs, and nouns, verbs, & adjectives with the corresponding POS tag respectively. Finally, Content-only is trained on sentences with all words that are not tagged as NOUN, VERB, ADJ removed; the remaining words are not ablated.
We train the classifiers on the training set, balancing the class distribution to make sure there are the same number of sentences from each style. Classifiers are trained using fastText BIBREF19, using tri-gram features with all other settings as default. table:classifiers shows the accuracies of the classifiers.
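A minimal sketch of this classifier setup with the fastText Python bindings (the file name and the __label__ input format are our assumptions about how the data would be prepared):

    import fasttext

    # train.txt: one sentence per line in fastText format, e.g.
    # "__label__gothic the moon rose over the PROPN estate ..."
    model = fasttext.train_supervised(input="train.txt", wordNgrams=3)  # tri-gram features, other settings default
    labels, probs = model.predict("it is PROPN , the PROPN of the VERB planet")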
The styles are highly distinctive: the All classifier has an accuracy of 86%. Additionally, even the Ablated NVA is quite successful, with 75% accuracy, even without access to any content words. The Content only classifier is also quite successful, at 80% accuracy. This indicates that these stylistic genres are distinctive at both the content level and at the syntactic level.
<<</Preliminary Classification Experiments>>>
<<<Formal Model of Style>>>
Given that non-content words are distinctive enough for a classifier to determine style, we propose a suite of low-level linguistic feature counts (henceforth, controls) as our formal, content-blind definition of style. The style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style BIBREF20. Controls are extracted heuristically, and almost all rely on counts of pre-defined word lists. For constituency parses we use the Stanford Parser BIBREF21. table:controlexamples lists all the controls along with examples.
<<<Reconstruction Task>>>
Models are trained with a reconstruction task, in which a distorted version of a reference sentence is input and the goal is to output the original reference.
fig:sentenceinput illustrates the process. Controls are calculated heuristically. All words found in the control word lists are then removed from the reference sentence. The remaining words, which represent the content, are used as input into the model, along with their POS tags and lemmas.
In this way we encourage models to construct a sentence using content and style independently. This will allow us to vary the stylistic controls while keeping the content constant, and successfully perform style transfer. When generating a new sentence, the controls correspond to the counts of the corresponding syntactic features that we expect to be realized in the output.
<<</Reconstruction Task>>>
<<</Formal Model of Style>>>
<<<Neural Architecture>>>
We implement our feature controlled language model using a neural encoder-decoder with attention BIBREF22, using 2-layer uni-directional gated recurrent units (GRUs) for the encoder and decoder BIBREF23.
The input to the encoder is a sequence of $M$ content words, along with their lemmas, and fine and coarse grained part-of-speech (POS) tags, i.e. $X_{.,j} = (x_{1,j},\ldots ,x_{M,j})$ for $j \in \mathcal {T} = \lbrace \textrm {word, lemma, fine-pos, coarse-pos}\rbrace $. We embed each token (and its lemma and POS) before concatenating, and feeding into the encoder GRU to obtain encoder hidden states, $ c_i = \operatorname{gru}(c_{i-1}, \left[E_j(X_{i,j}), \; j\in \mathcal {T} \right]; \omega _{enc}) $ for $i \in {1,\ldots ,M},$ where initial state $c_0$, encoder GRU parameters $\omega _{enc}$ and embedding matrices $E_j$ are learned parameters.
The decoder sequentially generates the outputs, i.e. a sequence of $N$ tokens $y =(y_1,\ldots ,y_N)$, where all tokens $y_i$ are drawn from a finite output vocabulary $\mathcal {V}$. To generate each token we first embed the previously generated token $y_{i-1}$ and a vector of $K$ control features $z = ( z_1,\ldots , z_K)$ (using embedding matrices $E_{dec}$ and $E_{\textrm {ctrl-1}}, \ldots , E_{\textrm {ctrl-K}}$ respectively), before concatenating them into a vector $\rho _i,$ and feeding them into the decoder side GRU along with the previous decoder state $h_{i-1}$: $h_i = \operatorname{gru}(h_{i-1}, \rho _i; \omega _{dec}),$
where $\omega _{dec}$ are the decoder side GRU parameters.
Using the decoder hidden state $h_i$ we then attend to the encoder context vectors $c_j$, computing attention scores $\alpha _{i,j}$, where
before passing $h_i$ and the attention weighted context $\bar{c}_i=\sum _{j=1}^M \alpha _{i,j} c_j$ into a single hidden-layer perceptron with softmax output to compute the next token prediction probability,
where $W,U,V$ and $u,v, \nu $ are parameter matrices and vectors respectively.
Crucially, the controls $z$ remain fixed for all input decoder steps. Each $z_k$ represents the frequency of one of the low-level features described in sec:formalstyle. During training on the reconstruction task, we can observe the full output sequence $y,$ and so we can obtain counts for each control feature directly. Controls receive a different embedding depending on their frequency, where counts of 0-20 each get a unique embedding, and counts greater than 20 are assigned to the same embedding. At test time, we set the values of the controls according to procedure described in Section SECREF25.
We use embedding sizes of 128, 128, 64, and 32 for token, lemma, fine, and coarse grained POS embedding matrices respectively. Output token embeddings $E_{dec}$ have size 512, and 50 for the control feature embeddings. We set 512 for all GRU and perceptron output sizes. We refer to this model as the StyleEQ model. See fig:model for a visual depiction of the model.
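To make the decoder-side conditioning concrete, here is an illustrative PyTorch sketch of how the previous token and the $K$ control counts could be embedded and concatenated into $\rho _i$ (our own simplification, not the authors' code):

    import torch
    import torch.nn as nn

    class ControlledDecoderInput(nn.Module):
        def __init__(self, vocab_size, num_controls, max_count=20, tok_dim=512, ctrl_dim=50):
            super().__init__()
            self.tok_emb = nn.Embedding(vocab_size, tok_dim)
            # counts 0-20 each get a unique embedding; larger counts share index 20
            self.ctrl_embs = nn.ModuleList(
                [nn.Embedding(max_count + 1, ctrl_dim) for _ in range(num_controls)])
            self.max_count = max_count

        def forward(self, prev_token, controls):
            # prev_token: (batch,) token ids; controls: (batch, K) integer feature counts
            parts = [self.tok_emb(prev_token)]
            for k, emb in enumerate(self.ctrl_embs):
                parts.append(emb(controls[:, k].clamp(max=self.max_count)))
            return torch.cat(parts, dim=-1)   # rho_i, the input to the decoder GRU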
<<<Baseline Genre Model>>>
We compare the above model to a similar model, where rather than explicitly represent $K$ features as input, we have $K$ features in the form of a genre embedding, i.e. we learn a genre specific embedding for each of the gothic, scifi, and philosophy genres, as studied in BIBREF8 and BIBREF7. To generate in a specific style, we simply set the appropriate embedding. We use genre embeddings of size 850 which is equivalent to the total size of the $K$ feature embeddings in the StyleEQ model.
<<</Baseline Genre Model>>>
<<<Training>>>
We train both models with minibatch stochastic gradient descent with a learning rate of 0.25, weight decay penalty of 0.0001, and batch size of 64. We also apply dropout with a drop rate of 0.25 to all embedding layers, the GRUs, and the perceptron hidden layer. We train for a maximum of 200 epochs, using validation set BLEU score BIBREF26 to select the final model iteration for evaluation.
<<</Training>>>
<<<Selecting Controls for Style Transfer>>>
In the Baseline model, style transfer is straightforward: given an input sentence in one style, fix the encoder content features while selecting a different genre embedding. In contrast, the StyleEQ model requires selecting the counts for each control. Although there are a variety of ways to do this, we use a method that encourages a diversity of outputs.
In order to ensure the controls match the reference sentence in magnitude, we first find all sentences in the target style with the same number of words as the reference sentence. Then, we add the following constraints: the same number of proper nouns, the same number of nouns, the same number of verbs, and the same number of adjectives. We randomly sample $n$ of the remaining sentences, and for each of these `sibling' sentences, we compute the controls. For each of the new controls, we generate a sentence using the original input sentence content features. The generated sentences are then reranked using the length normalized log-likelihood under the model. We can then select the highest scoring sentence as our style-transferred output, or take the top-$k$ when we need a diverse set of outputs.
The reason for this process is that although there are group-level distinctive controls for each style, e.g. the high use of punctuation in philosophy books or of first person pronouns in gothic novels, at the sentence level it can understandably be quite varied. This method matches sentences between styles, capturing the natural distribution of the corpora.
<<</Selecting Controls for Style Transfer>>>
<<</Neural Architecture>>>
<<</Models>>>
<<<Automatic Evaluations>>>
<<<BLEU Scores & Perplexity>>>
In tab:blueperpl we report BLEU scores for the reconstruction of test set sentences from their content and feature representations, as well as the model perplexities of the reconstruction. For both models, we use beam decoding with a beam size of eight. Beam candidates are ranked according to their length normalized log-likelihood. On these automatic measures we see that StyleEQ is better able to reconstruct the original sentences. In some sense this evaluation is mostly a sanity check, as the feature controls contain more locally specific information than the genre embeddings, which say very little about how many specific function words one should expect to see in the output.
<<</BLEU Scores & Perplexity>>>
<<<Feature Control>>>
Designing controllable language models is often difficult because of the various dependencies between tokens; when changing one control value it may affect other aspects of the surface realization. For example, increasing the number of conjunctions may affect how the generator places prepositions to compensate for structural changes in the sentence. Since our features are deterministically recoverable, we can perturb an individual control value and check to see that the desired change was realized in the output. Moreover, we can check the amount of change in the other non-perturbed features to measure the independence of the controls.
We sample 50 sentences from each genre from the test set. For each sample, we create a perturbed control setting for each control by adding $\delta $ to the original control value. This is done for $\delta \in \lbrace -3, -2, -1, 0, 1, 2, 3\rbrace $, skipping any settings where the new control value would be negative.
table:autoeval:ctrl shows the results of this experiment. The Exact column displays the percentage of generated texts that realize the exact number of control features specified by the perturbed control. High percentages in the Exact column indicate greater one-to-one correspondence between the control and surface realization. For example, if the input was “Dracula and Frankenstein and the mummy,” and we change the conjunction feature by $\delta =-1$, an output of “Dracula, Frankenstein and the mummy,” would count towards the Exact category, while “Dracula, Frankenstein, the mummy,” would not.
The Direction column specifies the percentage of cases where the generated text produces a changed number of the control features that, while not exactly matching the specified value of the perturbed control, does change from the original in the correct direction. For example, if the input again was “Dracula and Frankenstein and the mummy,” and we change the conjunction feature by $\delta =-1$, both outputs of “Dracula, Frankenstein and the mummy,” and “Dracula, Frankenstein, the mummy,” would count towards Direction. High percentages in Direction mean that we could roughly ensure desired surface realizations by modifying the control by a larger $\delta $.
Finally, the Atomic column specifies the percentage of cases where the generated text with the perturbed control only realizes changes to that specific control, while other features remain constant. For example, if the input was “Dracula and Frankenstein in the castle,” and we set the conjunction feature to $\delta =-1$, an output of “Dracula near Frankenstein in the castle,” would not count as Atomic because, while the number of conjunctions did decrease by one, the number of simple prepositions changed. An output of “Dracula, Frankenstein in the castle,” would count as Atomic. High percentages in the Atomic column indicate this feature is only loosely coupled to the other features and can be changed without modifying other aspects of the sentence.
Controls such as conjunction, determiner, and punctuation are highly controllable, with Exact rates above 80%. But with the exception of the constituency parse features, all controls have high Direction rates, many in the 90s. These results indicate our model successfully controls these features. The fact that the Atomic rates are relatively low is to be expected, as controls are highly coupled – e.g. to increase 1stPer, it is likely another pronoun control will have to decrease.
<<</Feature Control>>>
<<<Automatic Classification>>>
For each model we look at the classifier prediction accuracy of reconstructed and transferred sentences. In particular we use the Ablated NVA classifier, as this is the most content-blind one.
We produce 16 outputs from both the Baseline and StyleEQ models. For the Baseline, we use a beam search of size 16. For the StyleEQ model, we use the method described in Section SECREF25 to select 16 `sibling' sentences in the target style, and generate a transferred sentence for each. We look at three different methods for selection: all, which uses all output sentences; top, which selects the top ranked sentence based on the score from the model; and oracle, which selects the sentence with the highest classifier likelihood for the intended style.
The reason for the third method, which indeed acts as an oracle, is that using the score from the model didn't always surface a transferred sentence that best reflected the desired style. Partially this was because the model score was mostly a function of how well a transferred sentence reflected the distribution of the training data. But additionally, some control settings are more indicative of a target style than others. The use of the classifier allows us to identify the most suitable control setting for a target style that was roughly compatible with the number of content words.
In table:fasttext-results we see the results. Note that for both models, the all and top classification accuracy tends to be quite similar, though for the Baseline they are often almost exactly the same when the Baseline has little to no diversity in the outputs.
However, the oracle introduces a huge jump in accuracy for the StyleEQ model, especially compared to the Baseline, partially because the diversity of outputs from StyleEQ is much higher; often the Baseline model produces no diversity – the 16 output sentences may be nearly identical, save a single word or two. It's important to note that neither model uses the classifier in any way except to select the sentence from 16 candidate outputs.
What this implies is that lurking within the StyleEQ model outputs are great sentences, even if they are hard to find. In many cases, the StyleEQ model has a classification accuracy above the base rate from the test data, which is 75% (see table:classifiers).
<<</Automatic Classification>>>
<<</Automatic Evaluations>>>
<<<Human Evaluation>>>
table:cherrypicking shows example outputs for the StyleEQ and Baseline models. Through inspection we see that the StyleEQ model successfully changes syntactic constructions in stylistically distinctive ways, such as increasing syntactic complexity when transferring to philosophy, or changing relevant pronouns when transferring to sci-fi. In contrast, the Baseline model doesn't create outputs that move far from the reference sentence, making only minor modifications such as changing the type of a single pronoun.
To determine how readers would classify our transferred sentences, we recruited three English Literature PhD candidates, all of whom had passed qualifying exams that included determining both genre and era of various literary texts.
<<<Fluency Evaluation>>>
To evaluate the fluency of our outputs, we had the annotators score reference sentences, reconstructed sentences, and transferred sentences on a 0-5 scale, where 0 was incoherent and 5 was a well-written human sentence.
table:fluency shows the average fluency of various conditions from all three annotators. Both models have fluency scores around 3. Upon inspection of the outputs, it is clear that many have fluency errors, resulting in ungrammatical sentences.
Notably the Baseline often has slightly higher fluency scores than the StyleEQ model. This is likely because the Baseline model is far less constrained in how to construct the output sentence, and upon inspection often reconstructs the reference sentence even when performing style transfer. In contrast, the StyleEQ is encouraged to follow the controls, but can struggle to incorporate these controls into a fluent sentence.
The fluency of all outputs is lower than desired. We expect that incorporating pre-trained language models would increase the fluency of all outputs without requiring larger datasets.
<<</Fluency Evaluation>>>
<<<Human Classification>>>
Each annotator annotated 90 reference sentences (i.e. from the training corpus) with which style they thought the sentence was from. The accuracy on this baseline task for annotators A1, A2, and A3 was 80%, 88%, and 80% respectively, giving us an upper expected bound on the human evaluation.
In discussing this task with the annotators, they noted that content is a heavy predictor of genre, and that would certainly confound their annotations. To attempt to mitigate this, we gave them two annotation tasks: which-of-3 where they simply marked which style they thought a sentence was from, and which-of-2 where they were given the original style and marked which style they thought the sentence was transferred into.
For each task, each annotator marked 180 sentences: 90 from each model, with an even split across the three genres. Annotators were presented the sentences in a random order, without information about the models. In total, each marked 270 sentences. (Note there were no reconstructions in this annotation task.)
table:humanclassifiers shows the results. In both tasks, accuracy of annotators classifying the sentence as its intended style was low. In which-of-3, scores were around 20%, below the chance rate of 33%. In which-of-2, scores were in the 50s, slightly above the chance rate of 50%. This was the case for both models. There was a slight increase in accuracy for the StyleEQ model over the Baseline for which-of-3, but the opposite trend for which-of-2, suggesting these differences are not significant.
It's clear that it's hard to fool the annotators. Introspecting on their approach, the annotators expressed having immediate responses based on key words – for instance any references of `space' implied `sci-fi'. We call this the `vampires in space' problem, because no matter how well a gothic sentence is rewritten as a sci-fi one, it's impossible to ignore the fact that there is a vampire in space. The transferred sentences, in the eyes of the Ablated NVA classifier (with no access to content words), did quite well transferring into their intended style. But people are not blind to content.
<<</Human Classification>>>
<<<The `Vampires in Space' Problem>>>
Working with the annotators, we regularly came up against the 'vampires in space' problem: while syntactic constructions account for much of the distinction of literary styles, these constructions often co-occur with distinctive content.
Stylometrics finds syntactic constructions are great at fingerprinting, but suggests that these constructions are surface realizations of higher-level stylistic decisions. The number and type of personal pronouns is a reflection of how characters feature in a text. A large number of positional prepositions may be the result of a writer focusing on physical descriptions of scenes. In our attempt to decouple these, we create Frankenstein sentences, which piece together features of different styles – we are putting vampires in space.
Another way to validate our approach would be to select data that is stylistically distinctive but with similar content: perhaps genres in which content is static but language use changes over time, stylistically distinct authors within a single genre, or parodies of a distinctive genre.
<<</The `Vampires in Space' Problem>>>
<<</Human Evaluation>>>
<<<Conclusion and Future Work>>>
We present a formal, extendable model of style that can add control to any neural text generation system. We model style as a suite of low-level linguistic controls, and train a neural encoder-decoder model to reconstruct reference sentences given only content words and the setting of the controls. In automatic evaluations, we show that our model can fool a style classifier 84% of the time and outperforms a baseline genre-embedding model. In human evaluations, we encounter the `vampires in space' problem in which content and style are equally discriminative but people focus more on the content.
In future work we would like to model higher-level syntactic controls. BIBREF20 show that differences in clausal constructions, for instance having a dependent clause before an independent clause or vice versa, is a marker of style appreciated by the reader. Such features would likely interact with our lower-level controls in an interesting way, and provide further insight into style transfer in text.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nStyle Transfer with Parallel Data\nStyle Transfer without Parallel Data\nControlling Linguistic Features\nStylometry and the Digital Humanities\nModels\nPreliminary Classification Experiments\nFormal Model of Style\nReconstruction Task\nNeural Architecture\nBaseline Genre Model\nTraining\nSelecting Controls for Style Transfer\nAutomatic Evaluations\nBLEU Scores & Perplexity\nFeature Control\nAutomatic Classification\nHuman Evaluation\nFluency Evaluation\nHuman Classification\nThe `Vampires in Space' Problem\nConclusion and Future Work"
],
"type": "outline"
}
|
2001.07209
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Text-based inference of moral sentiment change
<<<Abstract>>>
We present a text-based framework for investigating moral sentiment change of the public via longitudinal corpora. Our framework is based on the premise that language use can inform people's moral perception toward right or wrong, and we build our methodology by exploring moral biases learned from diachronic word embeddings. We demonstrate how a parameter-free model supports inference of historical shifts in moral sentiment toward concepts such as slavery and democracy over centuries at three incremental levels: moral relevance, moral polarity, and fine-grained moral dimensions. We apply this methodology to visualizing moral time courses of individual concepts and analyzing the relations between psycholinguistic variables and rates of moral sentiment change at scale. Our work offers opportunities for applying natural language processing toward characterizing moral sentiment change in society.
<<</Abstract>>>
<<<Moral sentiment change and language>>>
People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention.
The topic of moral sentiment has been thus far considered a traditional inquiry in philosophy BIBREF1, BIBREF2, BIBREF3, with contemporary development of this topic represented in social psychology BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, cognitive linguistics BIBREF9, and more recently, the advent of Moral Foundations Theory BIBREF10, BIBREF11, BIBREF12. Despite the fundamental importance and interdisciplinarity of this topic, large-scale formal treatment of moral sentiment, particularly its evolution, is still in infancy from the natural language processing (NLP) community (see overview in Section SECREF2).
We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change. We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13, BIBREF14 and social stereotypes BIBREF15. Differing from this existing work, we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora. Accordingly, we present to our knowledge the first text-based framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance, moral polarity, and fine-grained moral dimensions. As such, for any query item such as slavery, our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time.
Our approach is based on the premise that people's moral sentiments are reflected in natural language, and more specifically, in text BIBREF16. In particular, we know that books are highly effective tools for conveying moral views to the public. For example, Uncle Tom's Cabin BIBREF17 was central to the anti-slavery movement in the United States. The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text.
Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the public's moral sentiment toward concepts signified by the probe words slavery, democracy, and gay. Each of these concepts illustrates a piece of “moral history” tracked through a period of 200 years (1800 to 2000), and our framework is able to capture nuanced moral changes. For instance, slavery initially lies at the border of moral virtue (positive sentiment) and vice (negative sentiment) in the 1800s yet gradually moves toward the center of moral vice over the 200-year period; in contrast, democracy considered morally negative (e.g., subversion and anti-authority under monarchy) in the 1800s is now perceived as morally positive, as a mechanism for fairness; gay, which came to denote homosexuality only in the 1930s BIBREF18, is inferred to be morally irrelevant until the modern day. We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change.
The general text-based framework that we propose consists of a parameter-free approach that facilitates the prediction of public moral sentiment toward individual concepts, automated retrieval of morally changing concepts, and broad-scale psycholinguistic analyses of historical rates of moral sentiment change. We provide a description of the probabilistic models and data used, followed by comprehensive evaluations of our methodology.
<<</Moral sentiment change and language>>>
<<<Emerging NLP research on morality>>>
An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.
While there is emerging awareness of ethical issues in NLP BIBREF24, BIBREF25, work exploiting NLP techniques to study principles of moral sentiment change is scarce. Moreover, since morality is variable across cultures and time BIBREF12, BIBREF16, developing systems that capture the diachronic nature of moral sentiment will be a pivotal research direction. Our work leverages and complements existing research that finds implicit human biases from word embeddings BIBREF13, BIBREF14, BIBREF19 by developing a novel perspective on using NLP methodology to discover principles of moral sentiment change in human society.
<<</Emerging NLP research on morality>>>
<<<A three-tier modelling framework>>>
Our framework treats the moral sentiment toward a concept at three incremental levels, as illustrated in Figure FIGREF3. First, we consider moral relevance, distinguishing between morally irrelevant and morally relevant concepts. At the second tier, moral polarity, we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain. Finally, a third tier classifies these concepts into fine-grained categories of human morality.
We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories.
<<<Lexical data for moral sentiment>>>
To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.
To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with the most neutral valence ratings that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.
<<</Lexical data for moral sentiment>>>
<<<Models>>>
We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above. Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts. In each tier, the model receives a query word embedding vector $\mathbf {q}$ and a set of seed words for each class in that tier, and infers the posterior probabilities over the set of classes $c$ with which the query concept is associated.
The seed words function as “labelled examples” that guide the moral classification of novel concepts, and are organized per classification tier as follows. In moral relevance classification, sets $\mathbf {S}_0$ and $\mathbf {S}_1$ contain the morally irrelevant and morally relevant seed words, respectively; for moral polarity, $\mathbf {S}_+$ and $\mathbf {S}_-$ contain the positive and negative seed words; and for fine-grained moral categories, $\mathbf {S}_1, \ldots , \mathbf {S}_{10}$ contain the seed words for the 10 categories of MFT. Then our general problem is to estimate $p(c\,|\,\mathbf {q})$, where $\mathbf {q}$ is a query vector and $c$ is a moral category in the desired tier.
We evaluate the following four models:
A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;
A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;
A $k$-Nearest Neighbors ($k$NN) model exploits local density estimation and classifies concepts according to the majority vote of the $k$ seed words closest to the query vector;
A Kernel Density Estimation (KDE) model performs density estimation at a broader scale by considering the contribution of each seed word toward the total likelihood of each class, regulated by a bandwidth parameter $h$ that controls the sensitivity of the model to distance in embedding space.
Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.
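The exact formulations live in Table TABREF2; the sketch below gives one plausible Python rendering of the four classifiers. The softmax over negative Euclidean distances, the Gaussian kernel for KDE, and the small variance floor in the Naïve Bayes model are illustrative assumptions rather than the paper's specification.

```python
import numpy as np
from scipy.stats import multivariate_normal

def centroid_classify(q, seed_sets):
    """Softmax over negative distances to each class's mean seed embedding."""
    scores = {c: -np.linalg.norm(q - np.asarray(S).mean(axis=0)) for c, S in seed_sets.items()}
    z = np.exp(np.array(list(scores.values())) - max(scores.values()))
    return dict(zip(scores, z / z.sum()))

def naive_bayes_classify(q, seed_sets):
    """Diagonal-covariance Gaussian fit to each class's seed embeddings."""
    ll = {}
    for c, S in seed_sets.items():
        S = np.asarray(S)
        ll[c] = multivariate_normal.logpdf(q, mean=S.mean(axis=0),
                                           cov=np.diag(S.var(axis=0) + 1e-8))
    z = np.exp(np.array(list(ll.values())) - max(ll.values()))
    return dict(zip(ll, z / z.sum()))

def knn_classify(q, seed_sets, k=5):
    """Vote shares among the k seed words nearest to the query."""
    pairs = sorted((np.linalg.norm(q - s), c) for c, S in seed_sets.items() for s in S)
    top = [c for _, c in pairs[:k]]
    return {c: top.count(c) / k for c in seed_sets}

def kde_classify(q, seed_sets, h=0.5):
    """Per-class Gaussian kernel density at the query, normalized across classes."""
    dens = {c: np.mean([np.exp(-np.sum((q - s) ** 2) / (2 * h ** 2)) for s in S])
            for c, S in seed_sets.items()}
    total = sum(dens.values())
    return {c: d / total for c, d in dens.items()}
```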
<<</Models>>>
<<</A three-tier modelling framework>>>
<<<Historical corpus data>>>
To apply our models diachronically, we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space.
Following BIBREF30, we combine skip-gram word embeddings BIBREF29 trained on longitudinal corpora of English with rotational alignments of embedding spaces to obtain diachronic word embeddings that are aligned through time.
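The rotational alignment in BIBREF30 amounts to solving an orthogonal Procrustes problem between each decade's space and a reference space. A rough sketch, with hypothetical input dictionaries mapping words to vectors, might look like:

```python
import numpy as np

def procrustes_align(base_emb, other_emb):
    """Rotate `other_emb` onto `base_emb` using their shared vocabulary."""
    shared = sorted(set(base_emb) & set(other_emb))
    A = np.vstack([other_emb[w] for w in shared])   # embeddings for decade t
    B = np.vstack([base_emb[w] for w in shared])    # reference decade embeddings
    U, _, Vt = np.linalg.svd(A.T @ B)               # SVD of the cross-covariance
    R = U @ Vt                                      # optimal orthogonal rotation
    return {w: v @ R for w, v in other_emb.items()}
```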
We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:
Google N-grams BIBREF31: a corpus of $8.5 \times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.
COHA BIBREF32: a smaller corpus of $4.1 \times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009.
<<</Historical corpus data>>>
<<<Model evaluations>>>
We evaluated our models in two ways: classification of moral seed words on all three tiers (moral relevance, polarity, and fine-grained categories), and correlation of model predictions with human judgments.
<<<Moral sentiment inference of seed words>>>
In this evaluation, we assessed the ability of our models to classify the seed words that compose our moral environment in a leave-one-out classification task. We performed the evaluation for all three classification tiers: 1) moral relevance, where seed words are split into morally relevant and morally irrelevant; 2) moral polarity, where moral seed words are split into positive and negative; 3) fine-grained categories, where moral seed words are split into the 10 MFT categories. In each test, we removed one seed word from the training set at a time to obtain cross-validated model predictions.
Table TABREF14 shows classification accuracy for all models and corpora on each tier for the 1990–1999 period. We observe that all models perform substantially better than chance, confirming the efficacy of our methodology in capturing moral dimensions of words. We also observe that models using word embeddings trained on Google N-grams perform better than those trained on COHA, which could be expected given the larger corpus size of the former.
In the remaining analyses, we employ the Centroid model, which offers competitive accuracy and a simple, parameter-free specification.
<<</Moral sentiment inference of seed words>>>
<<<Alignment with human valence ratings>>>
We evaluated the approximate agreement between our methodology and human judgments using valence ratings, i.e., the degree of pleasantness or unpleasantness of a stimulus. Our assumption is that the valence of a concept should correlate with its perceived moral polarity, e.g., morally repulsive ideas should evoke an unpleasant feeling. However, we do not expect this correspondence to be perfect; for example, the concept of dessert evokes a pleasant reaction without being morally relevant.
In this analysis, we took the valence ratings for the nearly 14,000 English nouns collected by BIBREF28 and, for each query word $q$, we generated a corresponding prediction of positive moral polarity from our model, $P(c_+\,|\,\mathbf {q})$. Table TABREF16 shows the correlations between human valence ratings and predictions of positive moral polarity generated by models trained on each of our corpora. We observe that the correlations are significant, suggesting the ability of our methodology to capture relevant features of moral sentiment from text.
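Operationally, this evaluation reduces to a Pearson correlation between paired lists of human ratings and model probabilities; a small sketch with hypothetical argument names is shown below.

```python
from scipy.stats import pearsonr

def valence_agreement(valence_ratings, embeddings, positive_polarity_prob):
    """Correlate human valence ratings with p(c_+ | q) for all rated words that have embeddings."""
    words = [w for w in valence_ratings if w in embeddings]
    human = [valence_ratings[w] for w in words]
    predicted = [positive_polarity_prob(embeddings[w]) for w in words]
    r, p = pearsonr(human, predicted)
    return r, p, len(words)
```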
In the remaining applications, we use the diachronic embeddings trained on the Google N-grams corpus, which enabled superior model performance throughout our evaluations.
<<</Alignment with human valence ratings>>>
<<</Model evaluations>>>
<<<Applications to diachronic morality>>>
We applied our framework in three ways: 1) evaluation of selected concepts in historical time courses and prediction of human judgments; 2) automatic detection of moral sentiment change; and 3) broad-scale study of the relations between psycholinguistic variables and historical change of moral sentiment toward concepts.
<<<Moral change in individual concepts>>>
<<<Historical time courses.>>>
We applied our models diachronically to predict time courses of moral relevance, moral polarity, and fine-grained moral categories toward two historically relevant topics: slavery and democracy. By grounding our model in word embeddings for each decade and querying concepts at the three tiers of classification, we obtained the time courses shown in Figure FIGREF21.
We note that these trajectories illustrate actual historical trends. Predictions for democracy show a trend toward morally positive sentiment, consistent with the adoption of democratic regimes in Western societies. On the other hand, predictions for slavery trend down and suggest a drop around the 1860s, coinciding with the American Civil War. We also observe changes in the dominant fine-grained moral categories, such as the perception of democracy as a fair concept, suggesting potential mechanisms behind the polarity changes and providing further insight into the public sentiment toward these concepts as evidenced by text.
<<</Historical time courses.>>>
<<<Prediction of human judgments.>>>
We explored the predictive potential of our framework by comparing model predictions with human judgments of moral relevance and acceptability. We used data from the Pew Research Center's 2013 Global Attitudes survey BIBREF33, in which participants from 40 countries judged 8 topics such as abortion and homosexuality as one of “acceptable", “unacceptable", and “not a moral issue".
We compared human ratings with model predictions at two tiers: for moral relevance, we paired the proportion of “not a moral issue” human responses with irrelevance predictions $p(c_0\,|\,\mathbf {q})$ for each topic, and for moral acceptability, we paired the proportion of “acceptable” responses with positive predictions $p(c_+\,|\,\mathbf {q})$. We used 1990s word embeddings, and obtained predictions for two-word topics by querying the model with their averaged embeddings.
Figure FIGREF23 shows plots of relevance and polarity predictions against survey proportions, and we observe a visible correspondence between model predictions and human judgments despite the difficulty of this task and limited number of topics.
<<</Prediction of human judgments.>>>
<<</Moral change in individual concepts>>>
<<<Retrieval of morally changing concepts>>>
Beyond analyzing selected concepts, we applied our framework predictively on a large repertoire of words to automatically discover the concepts that have exhibited the greatest change in moral sentiment at two tiers, moral relevance and moral polarity.
We selected the 10,000 nouns with highest total frequency in the 1800–1999 period according to data from BIBREF30, restricted to words labelled as nouns in WordNet BIBREF34 for validation. For each such word $\mathbf {q}$, we computed diachronic moral relevance scores $R_i = p(c_1\,|\,\mathbf {q}), i=1,\ldots ,20$ for the 20 decades in our time span. Then, we performed a linear regression of $R$ on $T = 1,\ldots ,n$ and took the fitted slope as a measure of moral relevance change. We repeated the same procedure for moral polarity. Finally, we removed words with average relevance score below $0.5$ to focus on morally relevant retrievals.
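The per-word change score is simply the slope of a least-squares line fit to the decade-level relevance probabilities; a sketch (helper names are hypothetical) is:

```python
import numpy as np

def relevance_change_slope(decade_embeddings, relevance_prob, word):
    """Fit a line to a word's moral relevance scores over decades and return the slope.

    decade_embeddings: list of dicts (one per decade) mapping word -> vector;
    relevance_prob: callable returning p(c_1 | q) for a query vector."""
    scores = [relevance_prob(emb[word]) for emb in decade_embeddings if word in emb]
    t = np.arange(1, len(scores) + 1)
    slope, _intercept = np.polyfit(t, scores, deg=1)   # highest-degree coefficient first
    return slope
```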
Table TABREF17 shows the words with steepest predicted change toward moral relevance, along with their predicted fine-grained moral categories in modern times (i.e., 1900–1999). Table TABREF18 shows the words with steepest predicted change toward the positive and negative moral poles. To further investigate the moral sentiment that may have led to such polarity shifts, we also show the predicted fine-grained moral categories of each word at its earliest time of predicted moral relevance and in modern times. Although we do not have access to ground truth for this application, these results offer initial insight into the historical moral landscape of the English language at scale.
<<</Retrieval of morally changing concepts>>>
<<<Broad-scale investigation of moral change>>>
In this application, we investigated the hypothesis that concept concreteness is inversely related to change in moral relevance, i.e., that concepts considered more abstract might become morally relevant at a higher rate than concepts considered more concrete. To test this hypothesis, we performed a multiple linear regression analysis on rate of change toward moral relevance of a large repertoire of words against concept concreteness ratings, word frequency BIBREF35, and word length BIBREF36.
We obtained norms of concreteness ratings from BIBREF28. We collected the same set of high-frequency nouns as in the previous analysis, along with their fitted slopes of moral relevance change. Since we were interested in moral relevance change within this large set of words, we restricted our analysis to those words whose model predictions indicate change in moral relevance, in either direction, from the 1800s to the 1990s.
We performed a multiple linear regression under the following model: $\rho (w) = \beta _f f(w) + \beta _l l(w) + \beta _c c(w) + \beta _0 + \epsilon $.
Here $\rho (w)$ is the slope of moral relevance change for word $w$; $f(w)$ is its average frequency; $l(w)$ is its character length; $c(w)$ is its concreteness rating; $\beta _f$, $\beta _l$, $\beta _c$, and $\beta _0$ are the corresponding factor weights and intercept, respectively; and $\epsilon \sim \mathcal {N}(0, \sigma )$ is the regression error term.
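A sketch of this regression using statsmodels follows; the column names are hypothetical, and any standardization or log-scaling of frequency applied in the paper is omitted.

```python
import pandas as pd
import statsmodels.formula.api as smf

def regress_relevance_change(words, slope, freq, length, conc):
    """OLS regression of moral-relevance slope on frequency, length, and concreteness.
    Each argument after `words` is a dict keyed by word."""
    df = pd.DataFrame({
        "rho": [slope[w] for w in words],
        "freq": [freq[w] for w in words],
        "length": [length[w] for w in words],
        "conc": [conc[w] for w in words],
    })
    fit = smf.ols("rho ~ freq + length + conc", data=df).fit()
    return fit.params, fit.pvalues   # coefficients (beta_f, beta_l, beta_c) and p-values
```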
Table TABREF27 shows the results of multiple linear regression. We observe that concreteness is a significant negative predictor of change toward moral relevance, suggesting that abstract concepts are more strongly associated with increasing moral relevance over time than concrete concepts. This significance persists under partial correlation test against the control factors ($p < 0.01$).
We further verified the diachronic component of this effect in a random permutation analysis. We generated 1,000 control time courses by randomly shuffling the 20 decades in our data, and repeated the regression analysis to obtain a control distribution for each regression coefficient. All effects became non-significant under the shuffled condition, suggesting the relevance of concept concreteness for diachronic change in moral sentiment (see Supplementary Material).
<<</Broad-scale investigation of moral change>>>
<<</Applications to diachronic morality>>>
<<<Discussion and conclusion>>>
We presented a text-based framework for exploring the socio-scientific problem of moral sentiment change. Our methodology uses minimal parameters and exploits implicit moral biases learned from diachronic word embeddings to reveal the public's moral perception toward a large concept repertoire over a long historical period.
Differing from existing work in NLP that treats moral sentiment as a flat classification problem BIBREF19, BIBREF20, our framework probes moral sentiment change at multiple levels and captures moral dynamics concerning relevance, polarity, and fine-grained categories informed by Moral Foundations Theory BIBREF12. We applied our methodology to the automated analyses of moral change both in individual concepts and at a broad scale, thus providing insights into psycholinguistic variables that associate with rates of moral change in the public.
Our current work focuses on exploring moral sentiment change in English-speaking cultures. Future research should evaluate the appropriateness of the framework to probing moral change from a diverse range of cultures and linguistic backgrounds, and the extent to which moral sentiment change interacts and crisscrosses with linguistic meaning change and lexical coinage. Our work creates opportunities for applying natural language processing toward characterizing moral sentiment change in society.
<<</Discussion and conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nMoral sentiment change and language\nEmerging NLP research on morality\nA three-tier modelling framework\nLexical data for moral sentiment\nModels\nHistorical corpus data\nModel evaluations\nMoral sentiment inference of seed words\nAlignment with human valence ratings\nApplications to diachronic morality\nMoral change in individual concepts\nHistorical time courses.\nPrediction of human judgments.\nRetrieval of morally changing concepts\nBroad-scale investigation of moral change\nDiscussion and conclusion"
],
"type": "outline"
}
|
2001.10161
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Bringing Stories Alive: Generating Interactive Fiction Worlds
<<<Abstract>>>
World building forms the foundation of any task that requires narrative intelligence. In this work, we focus on procedurally generating interactive fiction worlds---text-based worlds that players "see" and "talk to" using natural language. Generating these worlds requires referencing everyday and thematic commonsense priors in addition to being semantically consistent, interesting, and coherent throughout. Using existing story plots as inspiration, we present a method that first extracts a partial knowledge graph encoding basic information regarding world structure such as locations and objects. This knowledge graph is then automatically completed utilizing thematic knowledge and used to guide a neural language generation model that fleshes out the rest of the world. We perform human participant-based evaluations, testing our neural model's ability to extract and fill-in a knowledge graph and to generate language conditioned on it against rule-based and human-made baselines. Our code is available at this https URL.
<<</Abstract>>>
<<<Introduction>>>
Interactive fictions—also called text-adventure games or text-based games—are games in which a player interacts with a virtual world purely through textual natural language—receiving descriptions of what they “see” and writing out how they want to act; an example can be seen in Figure FIGREF2. Interactive fiction games are often structured as puzzles, or quests, set within the confines of a given game world. Interactive fictions have been adopted as a test-bed for real-time game playing agents BIBREF0, BIBREF1, BIBREF2. Unlike graphical games, interactive fictions test agents' abilities to infer the state of the world through communication and to indirectly affect change in the world through language. Interactive fictions are typically modeled after real or fantasy worlds; commonsense knowledge is an important factor in successfully playing interactive fictions BIBREF3, BIBREF4.
In this paper we explore a different challenge for artificial intelligence: automatically generating text-based virtual worlds for interactive fictions. A core component of many narrative-based tasks—everything from storytelling to game generation—is world building. The world of a story or game defines the boundaries of where the narrative is allowed and what the player is allowed to do. There are four core challenges to world generation: (1) commonsense knowledge: the world must reference priors that the player possesses so that players can make sense of the world and build expectations on how to interact with it. This is especially true in interactive fictions where the world is presented textually because many details of the world must necessarily be left out (e.g., the pot is on a stove; kitchens are found in houses) that might otherwise be literal in a graphical virtual world. (2) Thematic knowledge: interactive fictions usually involve a theme or genre that comes with its own expectations. For example, light speed travel is plausible in sci-fi worlds but not realistic in the real world. (3) Coherence: the world must not appear to be a random assortment of locations. (4) Natural language: the descriptions of the rooms as well as the permissible actions must be conveyed through text, implying that the system has natural language generation capability.
Because worlds are conveyed entirely through natural language, the potential output space for possible generated worlds is combinatorially large. To constrain this space and to make it possible to evaluate generated world, we present an approach which makes use of existing stories, building on the worlds presented in them but leaving enough room for the worlds to be unique. Specifically, we take a story such as Sherlock Holmes or Rapunzel—a linear reading experience—and extract the description of the world the story is set in to make an interactive world the player can explore.
Our method first extracts a partial, potentially disconnected knowledge graph from the story, encoding information regarding locations, characters, and objects in the form of $\langle entity,relation,entity\rangle $ triples. Relations between these types of entities as well as their properties are captured in this knowledge graph. However, stories often do not explicitly contain all the information required to fully fill out such a graph. A story may mention that there is a sword stuck in a stone but not what you can do with the sword or where it is in relation to everything else. Our method fills in missing relation and affordance information using thematic knowledge gained from training on stories in a similar genre. This knowledge graph is then used to guide the text description generation process for the various locations, characters, and objects. The game is then assembled on the basis of the knowledge graph and the corresponding generated descriptions.
We have two major contributions. (1) A neural model and a rules-based baseline for each of the tasks described above. The phases are that of graph extraction and completion followed by description generation and game formulation. Each of these phases are relatively distinct and utilize their own models. (2) A human subject study for comparing the neural model and variations on it to the rules-based and human-made approaches. We perform two separate human subject studies—one for the first phase of knowledge graph construction and another for the overall game creation process—testing specifically for coherence, interestingness, and the ability to maintain a theme or genre.
<<</Introduction>>>
<<<Related Work>>>
There has been a slew of recent work in developing agents that can play text games BIBREF0, BIBREF5, BIBREF1, BIBREF6. BIBREF7 in particular use knowledge graphs as state representations for game-playing agents. BIBREF8 propose QAit, a set of question answering tasks framed as text-based or interactive fiction games. QAit focuses on helping agents learn procedural knowledge through interaction with a dynamic environment. These works all focus on agents that learn to play a given set of interactive fiction games as opposed to generating them.
Scheherazade BIBREF9 is a system that learns a plot graph based on stories written by crowd sourcing the task of writing short stories. The learned plot graph contains details relevant to ensure story coherence. It includes: plot events, temporal precedence, and mutual exclusion relations. Scheherazade-IF BIBREF10 extends the system to generate choose-your-own-adventure style interactive fictions in which the player chooses from prescribed options. BIBREF11 explore a method of creating interactive narratives revolving around locations, wherein sentences are mapped to a real-world GPS location from a corpus of sentences belonging to a certain genre. Narratives are made by chaining together sentences selected based on the player's current real-world location. In contrast to these models, our method generates a parser-based interactive fiction in which the player types in a textual command, allowing for greater expressiveness.
BIBREF12 define the problem of procedural content generation in interactive fiction games in terms of the twin considerations of world and quest generation and focus on the latter. They present a system in which quest content is first generated by learning from a corpus and then grounded into a given interactive fiction world. The work in this paper focuses on the world generation problem glossed in the prior work. Thus these two systems can be seen as complementary.
Light BIBREF13 is a crowdsourced dataset of grounded text-adventure game dialogues. It contains information regarding locations, characters, and objects set in a fantasy world. The authors demonstrate that the supervised training of transformer-based models lets us generate contextually relevant dialog, actions, and emotes. Most in line with the spirit of this paper, BIBREF14 leverage Light to generate worlds for text-based games. They train a neural network based model using Light to compositionally arrange locations, characters, and objects into an interactive world. Their model is tested using a human subject study against other machine learning based algorithms with respect to the cohesiveness and diversity of generated worlds. Our work, in contrast, focuses on extracting the information necessary for building interactive worlds from existing story plots.
<<</Related Work>>>
<<<World Generation>>>
World generation happens in two phases. In the first phase, a partial knowledge graph is extracted from a story plot and then filled in using thematic commonsense knowledge. In the second phase, the graph is used as the skeleton to generate a full interactive fiction game—generating textual descriptions or “flavortext” for rooms and embedded objects. We present a novel neural approach in addition to a rule guided baseline for each of these phases in this section.
<<<Knowledge Graph Construction>>>
The first phase is to extract a knowledge graph from the story that depicts locations, characters, objects, and the relations between these entities. We present two techniques. The first uses neural question-answering technique to extract relations from a story text. The second, provided as a baseline, uses OpenIE5, a commonly used rule-based information extraction technique. For the sake of simplicity, we considered primarily the location-location and location-character/object relations, represented by the “next to” and “has” edges respectively in Figure FIGREF4.
<<<Neural Graph Construction>>>
While many neural models already exist that perform similar tasks such as named entity extraction and part of speech tagging, they often come at the cost of large amounts of specialized labeled data suited for that task. We instead propose a new method that leverages models trained for context-grounded question-answering tasks to do entity extraction with no task dependent data or fine-tuning necessary. Our method, dubbed AskBERT, leverages the Question-Answering (QA) model ALBERT BIBREF15. AskBERT consists of two main steps as shown in Figure FIGREF7: vertex extraction and graph construction.
The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by asking the QA model questions such as “Who is a character in the story?”. BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is trained to also output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking the QA model a question and masking out the most likely answer output at the previous step. This process continues until the <$no$-$answer$> token becomes the most likely answer.
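A minimal sketch of this ask-and-mask loop is shown below, using the Hugging Face question-answering pipeline as a stand-in for the ALBERT QA model described here. Because the default pipeline model is not trained to emit a <no-answer> token, a confidence threshold is used as an illustrative substitute for the stopping rule, and the masking string is likewise an assumption.

```python
from transformers import pipeline

def extract_vertices(story, question, max_entities=20, min_score=0.1):
    """Repeatedly ask the same question, masking each answer span, until confidence drops."""
    qa = pipeline("question-answering")   # swap in an ALBERT SQuAD 2.0 checkpoint if desired
    context, entities = story, []
    for _ in range(max_entities):
        result = qa(question=question, context=context)
        if result["score"] < min_score or not result["answer"].strip():
            break
        entities.append(result["answer"].strip())
        # Mask the extracted span so the next iteration surfaces a new answer.
        context = context[:result["start"]] + " [MASKED] " + context[result["end"]:]
    return entities

# e.g. extract_vertices(plot_text, "Who is a character in the story?")
```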
The next step is graph construction. Typical interactive fiction worlds are usually structured as trees, i.e. no cycles except between locations. Using this fact, we use an approach that builds a graph from the vertex set one relation—or edge—at a time. Once again using the entire story plot as context, we query the ALBERT-QA model, picking a random starting location $x$ from the set of vertices previously extracted and asking the questions “What location can I visit from $x$?” and “Who/What is in $x$?”. The methodology for phrasing these questions follows that described for the vertex extraction. The answer given by the QA model is matched to the vertex set by picking the vertex $u$ that contains the best word-token overlap with the answer. Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model: the probability that vertices $x$ and $u$ are related is computed from the sum of the individual token probabilities of all the tokens in the QA model's answer that overlap with $u$, normalized across the candidate vertices.
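The matching and relation-scoring step can be approximated as below; treating the overlap probability mass as an unnormalized score and normalizing across candidate vertices is an assumption about the exact formula, which is not reproduced here.

```python
def score_relations(answer_tokens, answer_probs, vertices):
    """Score each candidate vertex by the probability mass of answer tokens it shares,
    then normalize the scores into a distribution over candidates."""
    scores = {}
    for v in vertices:
        v_tokens = {t.lower() for t in v.split()}
        scores[v] = sum(p for tok, p in zip(answer_tokens, answer_probs)
                        if tok.lower() in v_tokens)
    total = sum(scores.values())
    if total == 0:
        return None, scores
    probs = {v: s / total for v, s in scores.items()}
    return max(probs, key=probs.get), probs
```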
<<</Neural Graph Construction>>>
<<<Rule-Based Graph Construction>>>
We compared our proposed AskBERT method with a non-neural, rule-based approach. This approach is based on the information extracted by OpenIE5, followed by some post-processing such as named-entity recognition and part-of-speech tagging. OpenIE5 combines cutting-edge ideas from several existing papers BIBREF17, BIBREF18, BIBREF19 to create a powerful information extraction tool. For a given sentence, OpenIE5 generates multiple triples in the format of $\langle entity, relation, entity\rangle $ as concise representations of the sentence, each with a confidence score. These triples are also occasionally annotated with location information indicating that a triple happened in a location.
As in the neural AskBERT model, we attempt to extract information regarding locations, characters, and objects. The entire story plot is passed into the OpenIE5 and we receive a set of triples. The location annotations on the triples are used to create a set of locations. We mark which sentences in the story contain these locations. POS tagging based on marking noun-phrases is then used in conjunction with NER to further filter the set of triples—identifying the set of characters and objects in the story.
The graph is constructed by linking the set of triples on the basis of the location they belong to. While some sentences contain very explicit location information for OpenIE5 to mark it out in the triples, most of them do not. We therefore make the assumption that the location remains the same for all triples extracted in between sentences where locations are explicitly mentioned. For example, if there exists $location A$ in the 1st sentence and $location B$ in the 5th sentence of the story, all the events described in sentences 1-4 are considered to take place in $location A$. The entities mentioned in these events are connected to $location A$ in the graph.
<<</Rule-Based Graph Construction>>>
<<</Knowledge Graph Construction>>>
<<<Description Generation>>>
The second phase involves using the constructed knowledge graph to generate textual descriptions of the entities we have extracted, also known as flavortext. This involves generating descriptions of what a player “sees” when they enter a location and short blurbs for each object and character. These descriptions need to not only be faithful to the information present in the knowledge graph and the overall story plot but to also contain flavor and be interesting for the player.
<<<Neural Description Generation>>>
Here, we approach the problem of description generation by taking inspiration from conditional transformer-based generation methods BIBREF20. Our approach is outlined in Figure FIGREF11 and an example description shown in Figure FIGREF2. For any given entity in the story, we first locate it in the story plot and then construct a prompt which consists of the entire story up to and including the sentence when the entity is first mentioned in the story followed by a question asking to describe that entity. With respect to prompts, we found that more direct methods such as question-answering were more consistent than open-ended sentence completion. For example, “Q: Who is the prince? A:” often produced descriptions that were more faithful to the information already present about the prince in the story than “You see the prince. He is/looks”. For our transformer-based generation, we use a pre-trained 355M GPT-2 model BIBREF21 finetuned on a corpus of plot summaries collected from Wikipedia. The plots used for finetuning are tailored specifically to the genre of the story in order to provide more relevant generation for the target genre. Additional details regarding the datasets used are provided in Section SECREF4. This method strikes a balance between knowledge graph verbalization techniques which often lack “flavor” and open-ended generation which struggles to maintain semantic coherence.
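A rough sketch of the prompting scheme with an off-the-shelf GPT-2 checkpoint is given below; the decoding settings are illustrative, and the paper additionally finetunes the 355M model on genre-specific plot summaries before generation.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def describe_entity(story_prefix, entity, model_name="gpt2-medium", max_new_tokens=60):
    """Build a QA-style prompt from the story up to the entity's first mention and
    sample a description conditioned on it."""
    tokenizer = GPT2Tokenizer.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name)
    prompt = f"{story_prefix}\nQ: Who is {entity}? A:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, do_sample=True, top_p=0.9,
                            max_new_tokens=max_new_tokens,
                            pad_token_id=tokenizer.eos_token_id)
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    return text[len(prompt):].strip()   # keep only the newly generated description
```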
<<</Neural Description Generation>>>
<<<Rules-Based Description Generation>>>
In the rule-based approach, we utilized the templates from the built-in text game generator of TextWorld BIBREF1 to generate the description for our graphs. TextWorld is an open-source library that provides a way to generate text-game learning environments for training reinforcement learning agents using pre-built grammars.
Two major templates involved here are the Room Intro Templates and Container Description Templates from TextWorld, responsible for generating descriptions of locations and blurbs for objects/characters respectively. The location and object/character information are taken from the knowledge graph constructed previously.
Example of Room Intro Templates: “This might come as a shock to you, but you've just $\#entered\#$ a <$location$-$name$>”
Example of Container Description Templates: “The <$location$-$name$> $\#contains\#$ <$object/person$-$name$>”
Each token surrounded by $\#$ signs can be expanded using a select set of terminal tokens. For instance, $\#entered\#$ could be filled with any of the following phrases here: entered; walked into; fallen into; moved into; stumbled into; come into. Additional prefixes, suffixes and adjectives were added to increase the relative variety of descriptions. Unlike the neural methods, the rule-based approach is not able to generate detailed and flavorful descriptions of the properties of the locations/objects/characters. By virtue of the templates, however, it is much better at maintaining consistency with the information contained in the knowledge graph.
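The template filling itself is straightforward string substitution; a toy sketch, using only the terminal list quoted above and otherwise illustrative names, looks like:

```python
import random

ROOM_INTRO_TEMPLATE = "This might come as a shock to you, but you've just #entered# a {location}."
EXPANSIONS = {
    "#entered#": ["entered", "walked into", "fallen into",
                  "moved into", "stumbled into", "come into"],
}

def fill_template(template, location):
    """Insert the location name and replace each #slot# with a random terminal token."""
    text = template.format(location=location)
    for slot, choices in EXPANSIONS.items():
        text = text.replace(slot, random.choice(choices))
    return text

print(fill_template(ROOM_INTRO_TEMPLATE, "dusty library"))
```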
<<</Rules-Based Description Generation>>>
<<</Description Generation>>>
<<</World Generation>>>
<<<Evaluation>>>
We conducted two sets of human participant evaluations by recruiting participants over Amazon Mechanical Turk. The first evaluation tests the knowledge graph construction phase, in which we measure perceived coherence and genre or theme resemblance of graphs extracted by different models. The second study compares full games—including description generation and game assembly, which can't easily be isolated from graph construction—generated by different methods. This study looks at how interesting the games were to the players in addition to overall coherence and genre resemblance. Both studies are performed across two genres: mystery and fairy-tales. This is done in part to test the relative effectiveness of our approach across different genres with varying thematic commonsense knowledge. The dataset used was compiled via story summaries that were scraped from Wikipedia via a recursive crawling bot. The bot searched pages both for plot sections and for links to other potential stories. From the process, 695 fairy-tales and 536 mystery stories were compiled from two categories: novels and short stories. We note that the mysteries did not often contain many fantasy elements, i.e. they consisted of mysteries set in our world such as Sherlock Holmes, while the fairy-tales were much more removed from reality. Details regarding how each of the studies were conducted and the corresponding setup are presented below.
<<<Knowledge Graph Construction Evaluation>>>
We first select a subset of 10 stories randomly from each genre and then extract a knowledge graph using three different models. Each participant is presented with the three graphs extracted from a single story in each genre and then asked to rank them on the basis of how coherent they were and how well the graphs match the genre. The graphs resemble the one shown in Figure FIGREF4 and are presented to the participant sequentially. The exact order of the graphs and genres was also randomized to mitigate any potential latent correlations. Overall, this study had a total of 130 participants. This ensures that, on average, graphs from every story were seen by 13 participants.
In addition to the neural AskBERT and rules-based methods, we also test a variation of the neural model which we dub the “random” approach. The method of vertex extraction remains identical to the neural method, but we instead connect the vertices randomly instead of selecting the most confident according to the QA model. We initialize the graph with a starting location entity. Then, we randomly sample from the vertex set and connect it to a randomly sampled location in the graph until every vertex has been connected. This ablation in particular is designed to test the ability of our neural model to predict relations between entities. It lets us observe how accurately linking related vertices affects each of the metrics that we test for. For a fair comparison between the graphs produced by different approaches, we randomly removed some of the nodes and edges from the initial graphs so that the maximum number of locations per graph and the maximum number of objects/people per location in each story genre are the same.
The results are shown in Table TABREF20. We show the median rank of each of the models for both questions across the genres. Ranked data is generally closely interrelated and so we perform Friedman's test between the three models to validate that the results are statistically significant. This is presented as the $p$-value in table (asterisks indicate significance at $p<0.05$). In cases where we make comparisons between specific pairs of models, when necessary, we additionally perform the Mann-Whitney U test to ensure that the rankings differed significantly.
In the mystery genre, the rules-based method was often ranked first in terms of genre resemblance, followed by the neural and random models. This particular result was not statistically significant however, likely indicating that all the models performed approximately equally in this category. The neural approach was deemed to be the most coherent followed by the rules and random. For the fairy-tales, the neural model ranked higher on both of the questions asked of the participants. In this genre, the random neural model also performed better than the rules based approach.
Tables TABREF18 and TABREF19 show the statistics of the constructed knowledge graphs in terms of vertices and edges. We see that the rules-based graph construction has a lower number of locations, characters, and relations between entities but far more objects in general. The greater number of objects is likely due to the rules-based approach being unable to correctly identify locations and characters. The gap between the methods is less pronounced in the mystery genre as opposed to the fairy-tales; in fact, the rules-based graphs have more relations than the neural ones. The random and neural models have the same number of entities in all categories by construction but random in general has lower variance on the number of relations found. In this case as well, the variance is lower for mystery as opposed to fairy-tales. When taken in the context of the results in Table TABREF20, it appears to indicate that leveraging thematic commonsense in the form of AskBERT for graph construction directly results in graphs that are more coherent and maintain genre more easily. This is especially true in the case of the fairy-tales where the thematic and everyday commonsense diverge more than in the case of the mysteries.
<<</Knowledge Graph Construction Evaluation>>>
<<<Full Game Evaluation>>>
This participant study was designed to test the overall game formulation process encompassing both phases described in Section SECREF3. A single story from each genre was chosen by hand from the 10 stories used for the graph evaluation process. From the knowledge graphs for this story, we generate descriptions using the neural, rules, and random approaches described previously. Additionally, we introduce a human-authored game for each story here to provide an additional benchmark. The author selected was familiar with text-adventure games in general as well as the genres of detective mystery and fairy tale. To ensure a fair comparison, we ensure that the maximum number of locations and maximum number of characters/objects per location matched the other methods. After setting general format expectations, the author read the selected stories and constructed knowledge graphs in a corresponding three-step process: identifying the $n$ most important entities in the story, mapping positional relationships between entities, and then synthesizing flavor text for the entities based on their location, the overall story plot, and background topic knowledge.
Once the knowledge graph and associated descriptions are generated for a particular story, they are then automatically turned into a fully playable text-game using the text game engine Evennia. Evennia was chosen for its flexibility and customization, as well as a convenient web client for end user testing. The data structures were translated into builder commands within Evennia that constructed the various layouts, flavor text, and rules of the game world. Users were placed in one “room” out of the different world locations within the game they were playing, and asked to explore the game world that was available to them. Users achieved this by moving between rooms and investigating objects. Each time a new room was entered or object investigated, the player's total number of explored entities would be displayed as their score.
Each participant was asked to play the neural game and then another one from one of the three additional models within a genre. The completion criterion for each game is to collect half the total score possible in the game, i.e. explore half of all possible rooms and examine half of all possible entities. This provided the participant with multiple possible methods of finishing a particular game. On completion, the participant was asked to rank the two games according to overall perceived coherence, interestingness, and adherence to the genre. We additionally provided a required initial tutorial game which demonstrated all of these mechanics. The order in which participants played the games was also randomized as in the graph evaluation to remove potential correlations. We had 75 participants in total, 39 for mystery and 36 for fairy-tales. As each player played the game created by the neural model and one created by one of the other approaches, this gave us 13 participants on average for each of the other approaches in the mystery genre and 12 for fairy-tales.
The summary of the results of the full game study is shown in Table TABREF23. As the comparisons made in this study are all made pairwise between our neural model and one of the baselines—they are presented in terms of what percentage of participants prefer the baseline game over the neural game. Once again, as this is highly interrelated ranked data, we perform the Mann-Whitney U test between each of the pairs to ensure that the rankings differed significantly. This is also indicated on the table.
In the mystery genre, the neural approach is generally preferred by a greater percentage of participants than the rules or random. The human-made game outperforms them all. A significant exception is that participants thought that the rules-based game was more interesting than the neural game. The trends in the fairy-tale genre are in general similar with a few notable deviations. The first deviation is that the rules-based and random approaches perform significantly worse than neural in this genre. We see also that the neural game is as coherent as the human-made game.
As in the previous study, we hypothesize that this is likely due to the rules-based approach being more suited to the mystery genre, which is often more mundane and contains fewer fantastical elements. By extension, we can say that thematic commonsense in fairy-tales has less overlap with everyday commonsense than for mundane mysteries. This has a few implications, one of which is that this theme-specific information is unlikely to have been seen by OpenIE5 before. This is indicated in the relatively improved performance of the rules-based model in this genre in terms of both interestingness and coherence. The genre difference can also be observed in the performance of the random model. This model is also lacking when compared to our neural model across all the questions asked, especially in the fairy-tale setting. This appears to imply that filling in gaps in the knowledge graph using thematically relevant information such as with AskBERT results in more interesting and coherent descriptions and games, especially in settings where the thematic commonsense diverges from everyday commonsense.
<<</Full Game Evaluation>>>
<<</Evaluation>>>
<<<Conclusion>>>
Procedural world generation systems are required to be semantically consistent, comply with thematic and everyday commonsense understanding, and maintain overall interestingness. We describe an approach that transforms a linear reading experience in the form of a story plot into an interactive narrative experience. Our method, AskBERT, extracts and fills in a knowledge graph using thematic commonsense and then uses it as a skeleton to flesh out the rest of the world. A key insight from our human participant study reveals that the ability to construct a thematically consistent knowledge graph is critical to overall perceptions of coherence and interestingness, particularly when the theme diverges from everyday commonsense understanding.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nWorld Generation\nKnowledge Graph Construction\nNeural Graph Construction\nRule-Based Graph Construction\nDescription Generation\nNeural Description Generation\nRules-Based Description Generation\nEvaluation\nKnowledge Graph Construction Evaluation\nFull Game Evaluation\nConclusion"
],
"type": "outline"
}
|
1909.00279
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Generating Classical Chinese Poems from Vernacular Chinese
<<<Abstract>>>
Classical Chinese poetry is a jewel in the treasure house of Chinese culture. Previous poem generation models only allow users to employ keywords to interfere with the meaning of generated poems, leaving the dominion of generation to the model. In this paper, we propose a novel task of generating classical Chinese poems from vernacular, which allows users to have more control over the semantics of generated poems. We adapt the approach of unsupervised machine translation (UMT) to our task. We use segmentation-based padding and reinforcement learning to address under-translation and over-translation respectively. According to experiments, our approach significantly improves perplexity and BLEU compared with typical UMT models. Furthermore, we explored guidelines on how to write the input vernacular to generate better poems. Human evaluation showed our approach can generate high-quality poems which are comparable to amateur poems.
<<</Abstract>>>
<<<Introduction>>>
Over thousands of years, millions of classical Chinese poems have been written. They contain ancient poets' emotions such as their appreciation for nature, desire for freedom and concerns for their countries. Among various types of classical poetry, quatrain poems stand out. On the one hand, their aestheticism and terseness exhibit unique elegance. On the other hand, composing such poems is extremely challenging due to their phonological, tonal and structural restrictions.
Most previous models for generating classical Chinese poems BIBREF0, BIBREF1 are based on limited keywords or characters at fixed positions (e.g., acrostic poems). Since users could only interfere with the semantics of generated poems using a few input words, models control the procedure of poem generation. In this paper, we proposed a novel model for classical Chinese poem generation. As illustrated in Figure FIGREF1, our model generates a classical Chinese poem based on a vernacular Chinese paragraph. Our objective is not only to make the model generate aesthetic and terse poems, but also to keep the rich semantics of the original vernacular paragraph. Therefore, our model gives users more control over the semantics of generated poems by carefully writing the vernacular paragraph.
Although a great number of classical poems and vernacular paragraphs are easily available, there exist only limited human-annotated pairs of poems and their corresponding vernacular translations. Thus, it is infeasible to train such a poem generation model using supervised approaches. Inspired by unsupervised machine translation (UMT) BIBREF2, we treated our task as a translation problem, namely translating vernacular paragraphs to classical poems.
However, our work is not just a straight-forward application of UMT. In a training example for UMT, the length difference between source and target languages is usually not large, but this is not true in our task. Classical poems tend to be more concise and abstract, while vernacular text tends to be detailed and lengthy. Based on our observation on gold-standard annotations, vernacular paragraphs usually contain more than twice as many Chinese characters as their corresponding classical poems. Therefore, such discrepancy leads to two main problems during our preliminary experiments: (1) Under-translation: when summarizing vernacular paragraphs to poems, some vernacular sentences are not translated and ignored by our model. Take the last two vernacular sentences in Figure FIGREF1 as examples: they are not covered in the generated poem. (2) Over-translation: when expanding poems to vernacular paragraphs, certain words are unnecessarily translated multiple times. For example, the last sentence in the generated poem of Figure FIGREF1, as green as sapphire, is back-translated as as green as as as sapphire.
Inspired by the phrase segmentation schema in classical poems BIBREF3, we proposed the method of phrase-segmentation-based padding to address under-translation. By padding poems based on the phrase segmentation custom of classical poems, our model better aligns poems with their corresponding vernacular paragraphs and meanwhile lowers the risk of under-translation. Inspired by Paulus2018ADR, we designed a reinforcement learning policy to penalize the model if it generates vernacular paragraphs with too many repeated words. Experiments show our method can effectively decrease the possibility of over-translation.
The contributions of our work are threefold:
(1) We proposed a novel task for unsupervised Chinese poem generation from vernacular text.
(2) We proposed using phrase-segmentation-based padding and reinforcement learning to address two important problems in this task, namely under-translation and over-translation.
(3) Through extensive experiments, we proved the effectiveness of our models and explored how to write the input vernacular to inspire better poems. Human evaluation shows our models are able to generate high quality poems, which are comparable to amateur poems.
<<</Introduction>>>
<<<Related Works>>>
Classical Chinese Poem Generation Most previous works in classical Chinese poem generation focus on improving the semantic coherence of generated poems. Based on LSTM, Zhang and Lapata Zhang2014ChinesePG proposed generating poem lines incrementally by taking into account the history of what has been generated so far. Yan Yan2016iPA proposed a polishing generation schema, in which each poem line is generated incrementally and iteratively by refining each line one-by-one. Wang et al. Wang2016ChinesePG and Yi et al. Yi2018ChinesePG proposed models to keep the generated poems coherent and semantically consistent with the user's intent. There is also research that focuses on other aspects of poem generation. Yang et al. Yang2018StylisticCP explored increasing the diversity of generated poems using an unsupervised approach. Xu et al. Xu2018HowII explored generating Chinese poems from images. While most previous works generate poems based on topic words, our work targets a novel task: generating poems from vernacular Chinese paragraphs.
Unsupervised Machine Translation Compared with supervised machine translation approaches BIBREF4, BIBREF5, unsupervised machine translation BIBREF6, BIBREF2 does not rely on human-labeled parallel corpora for training. This technique has been shown to greatly improve the performance of low-resource language translation systems (e.g. English-Urdu translation). The unsupervised machine translation framework is also applied to various other tasks, e.g. image captioning BIBREF7, text style transfer BIBREF8, speech to text translation BIBREF9 and clinical text simplification BIBREF10. The UMT framework makes it possible to apply neural models to tasks where limited human labeled data is available. However, in previous tasks that adopt the UMT framework, the abstraction levels of source and target language are the same. This is not the case for our task.
Under-Translation & Over-Translation Both are troublesome problems for neural sequence-to-sequence models. Most previous related research adopts the coverage mechanism BIBREF11, BIBREF12, BIBREF13. However, as far as we know, there has been no successful attempt at applying the coverage mechanism to transformer-based models BIBREF14.
<<</Related Works>>>
<<<Model>>>
<<<Main Architecture>>>
We formulate our poem generation task as an unsupervised machine translation problem. As illustrated in Figure FIGREF1, based on the recently proposed UMT framework BIBREF2, our model is composed of the following components:
Encoder $\textbf {E}_s$ and decoder $\textbf {D}_s$ for vernacular paragraph processing
Encoder $\textbf {E}_t$ and decoder $\textbf {D}_t$ for classical poem processing
where $\textbf {E}_s$ (or $\textbf {E}_t$) takes in a vernacular paragraph (or a classical poem) and converts it into a hidden representation, and $\textbf {D}_s$ (or $\textbf {D}_t$) takes in the hidden representation and converts it into a vernacular paragraph (or a poem). Our model relies on a vernacular texts corpus $\textbf {\emph {S}}$ and a poem corpus $\textbf {\emph {T}}$. We denote $S$ and $T$ as instances in $\textbf {\emph {S}}$ and $\textbf {\emph {T}}$ respectively.
The training of our model relies on three procedures, namely parameter initialization, language modeling and back-translation. We will give detailed introduction to each procedure.
Parameter initialization As both vernacular and classical poems use Chinese characters, we initialize the character embeddings of both languages in one common space; the same character in the two languages shares the same embedding. This initialization helps associate characters with their plausible translations in the other language.
Language modeling It helps the model generate texts that conform to a certain language. A well-trained language model is able to detect and correct minor lexical and syntactic errors. We train the language models for both vernacular and classical poem by minimizing the following denoising reconstruction loss:
$\mathcal {L}^{lm} = \mathbb {E}_{S\sim \textbf {\emph {S}}}\big [\ell \big (\textbf {D}_s(\textbf {E}_s(S_N)),\, S\big )\big ] + \mathbb {E}_{T\sim \textbf {\emph {T}}}\big [\ell \big (\textbf {D}_t(\textbf {E}_t(T_N)),\, T\big )\big ]$
where $\ell (\cdot ,\cdot )$ denotes the token-level cross-entropy between a reconstructed sequence and its reference, and $S_N$ (or $T_N$) is generated by adding noise (drop, swap or blank a few words) in $S$ (or $T$).
Back-translation Based on a vernacular paragraph $S$, we generate a poem $T_S$ using $\textbf {E}_s$ and $\textbf {D}_t$; we then translate $T_S$ back into a vernacular paragraph $S_{T_S} = \textbf {D}_s(\textbf {E}_t(T_S))$. Here, $S$ could be used as gold standard for the back-translated paragraph $S_{T_S}$. In this way, we could turn the unsupervised translation into a supervised task by maximizing the similarity between $S$ and $S_{T_S}$. The same also applies to using poem $T$ as gold standard for its corresponding back-translation $T_{S_T}$. We define the following loss:
$\mathcal {L}^{bt} = \mathbb {E}_{S\sim \textbf {\emph {S}}}\big [\ell \big (\textbf {D}_s(\textbf {E}_t(T_S)),\, S\big )\big ] + \mathbb {E}_{T\sim \textbf {\emph {T}}}\big [\ell \big (\textbf {D}_t(\textbf {E}_s(S_T)),\, T\big )\big ]$
Note that $\mathcal {L}^{bt}$ does not back propagate through the generation of $T_S$ and $S_T$ as we observe no improvement in doing so. When training the model, we minimize the composite loss:
$\mathcal {L} = \alpha _1 \mathcal {L}^{lm} + \alpha _2 \mathcal {L}^{bt}$
where $\alpha _1$ and $\alpha _2$ are scaling factors.
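For concreteness, one possible PyTorch rendering of a single training step is sketched below. The encoder/decoder call signatures, the teacher-forced decoding, and the noise and translation helpers are all illustrative assumptions; the actual models are transformer-based sequence-to-sequence networks.

```python
import torch
import torch.nn.functional as F

def umt_step(E_s, D_s, E_t, D_t, S, T, add_noise, translate, alpha1=1.0, alpha2=1.0):
    """One step of denoising language modelling plus back-translation.

    Assumed signatures: encoders map token ids to hidden states; decoders map
    (hidden states, teacher-forced inputs) to logits of shape (batch, length, vocab);
    `translate(E, D, x)` decodes x without gradient; `add_noise` drops/swaps/blanks tokens."""
    def xent(logits, target):
        return F.cross_entropy(logits.transpose(1, 2), target)

    # Language modelling: reconstruct each sequence from its noised version.
    lm_loss = (xent(D_s(E_s(add_noise(S)), S), S) +
               xent(D_t(E_t(add_noise(T)), T), T))

    # Back-translation: round-trip outputs act as pseudo-parallel training pairs.
    with torch.no_grad():
        T_S = translate(E_s, D_t, S)   # vernacular paragraph -> poem
        S_T = translate(E_t, D_s, T)   # poem -> vernacular paragraph
    bt_loss = (xent(D_s(E_t(T_S), S), S) +
               xent(D_t(E_s(S_T), T), T))

    return alpha1 * lm_loss + alpha2 * bt_loss
```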
<<</Main Architecture>>>
<<<Addressing Under-Translation and Over-Translation>>>
During our early experiments, we realized that the naive UMT framework is not readily applicable to our task. Classical Chinese poems are featured for their terseness and abstractness. They usually focus on depicting broad poetic images rather than details. We collected a dataset of classical Chinese poems and their corresponding vernacular translations; the average length of the poems is $32.0$ characters, while for vernacular translations it is $73.3$. The huge gap in sequence length between source and target language would induce over-translation and under-translation when training UMT models. In the following sections, we explain the two problems and introduce our improvements.
<<<Under-Translation>>>
By nature, classical poems are more concise and abstract while vernaculars are more detailed and lengthy; to express the same meaning, a vernacular paragraph usually contains more characters than a classical poem. As a result, when summarizing a vernacular paragraph $S$ to a poem $T_S$, $T_S$ may not cover all information in $S$ due to its length limit. In practice, we notice the generated poems usually only cover the information in the front part of the vernacular paragraph, while the latter part is unmentioned.
To alleviate under-translation, we propose phrase segmentation-based padding. Specifically, we first segment each line in a classical poem into several sub-sequences, we then join these sub-sequences with the special padding tokens <p>. During training, the padded lines are used instead of the original poem lines. As illustrated in Figure FIGREF10, padding would create better alignments between a vernacular paragraph and a prolonged poem, making it more likely for the latter part of the vernacular paragraph to be covered in the poem. As we mentioned before, the length of the vernacular translation is about twice the length of its corresponding classical poem, so we pad each segmented line to twice its original length.
According to Ye jia:1984, to present a stronger sense of rhythm, each type of poem has its unique phrase segmentation schema, for example, most seven-character quatrain poems adopt the 2-2-3 schema, i.e. each quatrain line contains 3 phrases, the first, second and third phrase contains 2, 2, 3 characters respectively. Inspired by this law, we segment lines in a poem according to the corresponding phrase segmentation schema. In this way, we could avoid characters within the scope of a phrase to be cut apart, thus best preserve the semantic of each phrase.BIBREF15
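A minimal sketch of this padding is given below. It assumes that each phrase is prolonged to twice its length by appending <p> tokens, which is one plausible reading of the description above (the paper only states that padded lines are twice the original length and that phrase boundaries are respected); the function name and the example line are illustrative.

def pad_poem_line(line, schema=(2, 2, 3), pad="<p>"):
    # Segment a poem line by its phrase schema (e.g. 2-2-3 for seven-character
    # quatrains) and pad each phrase with <p> tokens so that the padded line is
    # twice the original length while phrase boundaries stay intact.
    assert len(line) == sum(schema), "line length must match the schema"
    padded, start = [], 0
    for size in schema:
        padded.extend(list(line[start:start + size]) + [pad] * size)
        start += size
    return padded

# pad_poem_line("ABCDEFG") ->
# ['A', 'B', '<p>', '<p>', 'C', 'D', '<p>', '<p>', 'E', 'F', 'G', '<p>', '<p>', '<p>']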
<<</Under-Translation>>>
<<<Over-Translation>>>
In NMT, when decoding is complete, the decoder generates an <EOS> token, indicating that it has reached the end of the output sequence. However, when expanding a poem $T$ into a vernacular Chinese paragraph $S_T$, due to the conciseness of poems, after every source character in $T$ has been translated, the output sequence $S_T$ may still be much shorter than the expected length of a poem's vernacular translation. As a result, the decoder believes it has not finished decoding: instead of generating the <EOS> token, it continues to generate new output characters from previously translated source characters. This causes the decoder to repetitively output the same piece of text many times.
To remedy this issue, in addition to minimizing the original loss function $\mathcal {L}$, we propose to minimize a specific discrete metric, which is made possible with reinforcement learning.
We define repetition ratio $RR(S)$ of a paragraph $S$ as:
where $vocab(S)$ refers to the number of distinct characters in $S$ and $len(S)$ refers to the total number of characters in $S$. Obviously, if a generated sequence contains many repeated characters, it will have a high repetition ratio. Following self-critical policy gradient training BIBREF16, we define the following loss function:
where $\tau $ is a manually set threshold. Intuitively, minimizing $\mathcal {L}^{rl}$ is equivalent to maximizing the conditional likelihood of the sequence $S$ given $S_{T_S}$ if its repetition ratio is lower than the threshold $\tau $. Following BIBREF17, we revise the composite loss as:
where $\alpha _1, \alpha _2, \alpha _3$ are scaling factors.
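Since the displayed equations are not reproduced in this text, the sketch below assumes $RR(S) = 1 - vocab(S)/len(S)$ (so that repeated characters raise the ratio) and a reward that is simply thresholded at $\tau$; the actual loss follows the self-critical policy gradient formulation of BIBREF16, so treat this only as an illustration of the stated intuition.

def repetition_ratio(seq):
    # Assumed form: 1 - (#distinct characters / #characters).
    return 1.0 - len(set(seq)) / len(seq)

def anti_ot_loss(nll_of_S_given_STS, generated_seq, tau=0.35):
    # Reward a sample only when its repetition ratio stays below tau:
    # minimizing reward * NLL then maximizes the conditional likelihood
    # of S given S_{T_S} for non-repetitive generations.
    reward = 1.0 if repetition_ratio(generated_seq) < tau else 0.0
    return reward * nll_of_S_given_STS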
<<</Over-Translation>>>
<<</Addressing Under-Translation and Over-Translation>>>
<<</Model>>>
<<<Experiment>>>
The objectives of our experiment are to explore the following questions: (1) How much do our models improve the generated poems? (Section SECREF23) (2) What are characteristics of the input vernacular paragraph that lead to a good generated poem? (Section SECREF26) (3) What are weaknesses of generated poems compared to human poems? (Section SECREF27) To this end, we built a dataset as described in Section SECREF18. Evaluation metrics and baselines are described in Section SECREF21 and SECREF22. For the implementation details of building the dataset and models, please refer to supplementary materials.
<<<Datasets>>>
Training and Validation Sets We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems; the vernacular literature corpus contains 337K short paragraphs from 281 famous books and covers various literary forms including prose, fiction and essays. Note that the poem corpus and the vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set.
Test Set From online resources, we collected 487 seven-character quatrain poems from Tang Poems and Song Poems, as well as their corresponding high-quality vernacular translations. These poems serve as gold standards for poems generated from their corresponding vernacular translations. Table TABREF11 shows the statistics of our training, validation and test sets.
<<</Datasets>>>
<<<Evaluation Metrics>>>
Perplexity Perplexity reflects the probability a model generates a certain poem. Intuitively, a better model would yield higher probability (lower perplexity) on the gold poem.
BLEU As a standard evaluation metric for machine translation, BLEU BIBREF18 measures the intersection of n-grams between the generated poem and the gold poem. A better generated poem usually achieves higher BLEU score, as it shares more n-gram with the gold poem.
Human evaluation While perplexity and BLEU are objective metrics that can be applied to a large-volume test set, evaluating Chinese poems is, after all, a subjective task. We invited 30 human evaluators to join our human evaluation and divided them into two groups: the expert group contains 15 people who hold a bachelor's degree in Chinese literature, and the amateur group contains 15 people who hold a bachelor's degree in other fields. All 30 human evaluators are native Chinese speakers.
We ask evaluators to grade each generated poem from four perspectives: 1) Fluency: is the generated poem grammatically and rhythmically well formed? 2) Semantic coherence: is the generated poem itself semantically coherent and meaningful? 3) Semantic preservability: does the generated poem preserve the semantics of the modern Chinese translation? 4) Poeticness: does the generated poem display the characteristics of a poem and build good poetic images? The grading scale for each perspective is from 1 to 5.
<<</Evaluation Metrics>>>
<<<Baselines>>>
We compare the performance of the following models: (1) LSTM BIBREF19; (2) Naive Transformer BIBREF14; (3) Transformer + Anti OT (RL loss); (4) Transformer + Anti UT (phrase segmentation-based padding); (5) Transformer + Anti OT&UT.
<<</Baselines>>>
<<<Reborn Poems: Generating Poems from Vernacular Translations>>>
As illustrated in Table TABREF12 (ID 1), given the vernacular translation of each gold poem in the test set, we generate five poems using our models. Intuitively, the more a generated poem resembles the gold poem, the better the model is. We report mean perplexity and BLEU scores in Table TABREF19 (where +Anti OT refers to adding the reinforcement loss to mitigate over-translation and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation) and human evaluation results in Table TABREF20.
According to the experimental results, perplexity, BLEU scores and total scores in human evaluation are consistent with each other. We observe that all BLEU scores are fairly low; we believe this is reasonable, as there could be multiple ways to compose a poem given a vernacular paragraph. Among transformer-based models, both +Anti OT and +Anti UT outperform the naive transformer, while Anti OT&UT shows the best performance; this demonstrates that alleviating under-translation and over-translation both help generate better poems. Specifically, +Anti UT shows a bigger improvement than +Anti OT. According to the human evaluation, among the four perspectives, our Anti OT&UT brought the largest score improvement in semantic preservability, which shows that our improvement on semantic preservability was most obvious to human evaluators. All transformer-based models outperform LSTM. Note that the average length of the vernacular translation is over 70 characters; compared with transformer-based models, LSTM may only keep the information in the beginning and end of the vernacular paragraph. We anticipated some score inconsistency between the expert group and the amateur group; however, after analyzing the human evaluation results, we did not observe a big divergence between the two groups.
<<</Reborn Poems: Generating Poems from Vernacular Translations>>>
<<<Interpoetry: Generating Poems from Various Literature Forms>>>
Chinese literature is known not only for classical poems but also for various other literary forms. Song lyrics (宋词), or ci, also gained tremendous popularity in their palmy days, standing out in classical Chinese literature. Modern prose, modern poems and pop song lyrics have won extensive praise among Chinese people in modern days. The goal of this experiment is to transfer texts of other literary forms into quatrain poems. We expect the generated poems not only to keep the semantics of the original text, but also to demonstrate terseness, rhythm and other characteristics of ancient poems. Specifically, we chose 20 famous fragments from four types of Chinese literature (5 fragments for each of modern prose, modern poems, pop song lyrics and Song lyrics). As no ground truth is available, we resorted to human evaluation with the same grading standard as in Section SECREF23.
Comparing the scores of different literature forms, we observe Song lyric achieves higher scores than the other three forms of modern literature. It is not surprising as both Song lyric and quatrain poems are written in classical Chinese, while the other three literature forms are all in vernacular.
Comparing the scores within the same literature form, we observe that the scores of poems generated from different paragraphs tend to vary. After carefully studying the generated poems as well as their scores, we make the following observations:
1) In classical Chinese poems, poetic images (意象) were widely used to express emotions and to build artistic conception. A certain poetic image usually has some fixed implications; for example, autumn is usually used to imply sadness and loneliness. However, with the change of time, poetic images and their implications have also changed. According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves a higher score. As illustrated in Table TABREF12, both paragraph 2 and 3 are generated from pop song lyrics: paragraph 2 uses many poetic images from classical literature (e.g. pear flowers, makeup), while paragraph 3 uses modern poetic images (e.g. sparrows on the utility pole). Obviously, compared with poem 2, the sentences in poem 3 seem more confusing, as the poetic images of modern times may not fit well into the language model of classical poems.
2) We also observe that poems generated from descriptive paragraphs achieve higher scores than those generated from logical or philosophical paragraphs. For example, in Table TABREF12, both paragraph 4 (more descriptive) and paragraph 5 (more philosophical) were selected from famous modern prose. However, compared with poem 4, poem 5 seems semantically more confusing. We offer two explanations for this phenomenon: i. Limited by the 28-character restriction, it is hard for quatrain poems to cover complex logical or philosophical explanations. ii. As vernacular paragraphs are more detailed and lengthy, some information in a vernacular paragraph may be lost when it is summarized into a classical poem. While losing some information may not change the general meaning of a descriptive paragraph, it could make a big difference in a logical or philosophical paragraph.
<<</Interpoetry: Generating Poems from Various Literature Forms>>>
<<<Human Discrimination Test>>>
We manually select 25 generated poems from vernacular Chinese translations and pair each one with its corresponding human-written poem. We then present the 25 pairs to human evaluators and ask them to identify which poem in each pair was written by a human poet.
As demonstrated in Table TABREF29, although the general meanings of the human poems and the generated poems seem to be the same, the wordings they employ are quite different. This explains the low BLEU scores in Section 4.3. According to the test results in Table TABREF30, human evaluators only achieved a mean accuracy of 65.8%. This indicates that the best generated poems are somewhat comparable to poems written by amateur poets.
We interviewed the evaluators who achieved higher than 80% accuracy about their differentiation strategies. Most interviewed evaluators stated that the sentences in a human-written poem are usually well organized to highlight a theme or to build a poetic image, while the correlation between sentences in a generated poem does not seem strong. As demonstrated in Table TABREF29, the last two sentences in both human poems (marked in red) echo each other well, while the sentences in the machine-generated poems seem more independent. This gives us hints on the weakness of generated poems: while neural models may generate poems that resemble human poems lexically and syntactically, it is still hard for them to compete with human beings in building good structures.
<<</Human Discrimination Test>>>
<<</Experiment>>>
<<<Discussion>>>
Addressing Under-Translation In this part, we explore the effect of different phrase segmentation schemas on our phrase segmentation-based padding. According to Ye jia:1984, most seven-character quatrain poems adopt the 2-2-3 segmentation schema. As shown in the examples in Figure FIGREF31, we compare our phrase segmentation-based padding (2-2-3 schema) to two less common schemas (i.e., the 2-3-2 and 3-2-2 schemas); we report our experimental results in Table TABREF32.
The results show that our 2-2-3 segmentation schema greatly outperforms the 2-3-2 and 3-2-2 schemas in both perplexity and BLEU scores. Note that the BLEU scores of the 2-3-2 and 3-2-2 schemas remain almost the same as our naive baseline (without padding). Based on these observations, we draw the following conclusions: 1) although padding better aligns the vernacular paragraph with the poem, padding alone may not improve the quality of the generated poem; 2) the padding tokens should be placed according to the phrase segmentation schema of the poem, as this preserves the semantics within the scope of each phrase.
Addressing Over-Translation To explore the effect of our reinforcement learning policy on alleviating over-translation, we calculate the repetition ratio of vernacular paragraphs generated from the classical poems in our validation set. The naive transformer achieves a repetition ratio of $40.8\%$, while our +Anti OT achieves $34.9\%$. Given that the repetition ratio of vernacular paragraphs written by human beings in our validation set is $30.1\%$, the experimental results demonstrate that our RL loss effectively alleviates over-translation, which in turn leads to better generated poems.
<<</Discussion>>>
<<<Conclusion>>>
In this paper, we proposed a novel task of generating classical Chinese poems from vernacular paragraphs. We adapted the unsupervised machine translation model to our task and proposed two novel approaches to address the under-translation and over-translation problems. Experiments show that our task can give users more controllability in generating poems. In addition, our approaches are very effective at solving the problems that arise when the UMT model is directly used for this task. In the future, we plan to explore: (1) applying the UMT model to tasks where the abstraction levels of the source and target languages are different (e.g., unsupervised automatic summarization); (2) improving the quality of generated poems via better structure organization approaches.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Works\nModel\nMain Architecture\nAddressing Under-Translation and Over-Translation\nUnder-Translation\nOver-Translation\nExperiment\nDatasets\nEvaluation Metrics\nBaselines\nReborn Poems: Generating Poems from Vernacular Translations\nInterpoetry: Generating Poems from Various Literature Forms\nHuman Discrimination Test\nDiscussion\nConclusion"
],
"type": "outline"
}
|
1909.06762
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Entity-Consistent End-to-end Task-Oriented Dialogue System with KB Retriever
<<<Abstract>>>
Querying the knowledge base (KB) has long been a challenge in the end-to-end task-oriented dialogue system. Previous sequence-to-sequence (Seq2Seq) dialogue generation work treats the KB query as an attention over the entire KB, without the guarantee that the generated entities are consistent with each other. In this paper, we propose a novel framework which queries the KB in two steps to improve the consistency of generated entities. In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce a KB retrieval component which explicitly returns the most relevant KB row given a dialogue history. The retrieval result is further used to filter the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities. In the second step, we further perform the attention mechanism to address the most correlated KB column. Two methods are proposed to make the training feasible without labeled retrieval data, which include distant supervision and Gumbel-Softmax technique. Experiments on two publicly available task oriented dialog datasets show the effectiveness of our model by outperforming the baseline systems and producing entity-consistent responses.
<<</Abstract>>>
<<<Introduction>>>
Task-oriented dialogue system, which helps users to achieve specific goals with natural language, is attracting more and more research attention. With the success of the sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, several works tried to model the task-oriented dialogue as the Seq2Seq generation of response from the dialogue history BIBREF5, BIBREF6, BIBREF7. This kind of modeling scheme frees the task-oriented dialogue system from the manually designed pipeline modules and heavy annotation labor for these modules.
Different from typical text generation, the successful conversations for task-oriented dialogue system heavily depend on accurate knowledge base (KB) queries. Taking the dialogue in Figure FIGREF1 as an example, to answer the driver's query on the gas station, the dialogue system is required to retrieve the entities like “200 Alester Ave” and “Valero”. For the task-oriented system based on Seq2Seq generation, there is a trend in recent study towards modeling the KB query as an attention network over the entire KB entity representations, hoping to learn a model to pay more attention to the relevant entities BIBREF6, BIBREF7, BIBREF8, BIBREF9. Though achieving good end-to-end dialogue generation with over-the-entire-KB attention mechanism, these methods do not guarantee the generation consistency regarding KB entities and sometimes yield responses with conflict entities, like “Valero is located at 899 Ames Ct” for the gas station query (as shown in Figure FIGREF1). In fact, the correct address for Valero is 200 Alester Ave. A consistent response is relatively easy to achieve for the conventional pipeline systems because they query the KB by issuing API calls BIBREF10, BIBREF11, BIBREF12, and the returned entities, which typically come from a single KB row, are consistently related to the object (like the “gas station”) that serves the user's request. This indicates that a response can usually be supported by a single KB row. It's promising to incorporate such observation into the Seq2Seq dialogue generation model, since it encourages KB relevant generation and avoids the model from producing responses with conflict entities.
To achieve entity-consistent generation in the Seq2Seq task-oriented dialogue system, we propose a novel framework which queries the KB in two steps. In the first step, we introduce a retrieval module, the KB-retriever, to explicitly query the KB. Inspired by the observation that a single KB row usually supports a response, given the dialogue history and a set of KB rows, the KB-retriever uses a memory network BIBREF13 to select the most relevant row. The retrieval result is then fed into a Seq2Seq dialogue generation model to filter the irrelevant KB entities and improve the consistency within the generated entities. In the second step, we further perform an attention mechanism to address the most correlated KB column. Finally, we adopt the copy mechanism to incorporate the retrieved KB entity.
Since dialogue dataset is not typically annotated with the retrieval results, training the KB-retriever is non-trivial. To make the training feasible, we propose two methods: 1) we use a set of heuristics to derive the training data and train the retriever in a distant supervised fashion; 2) we use Gumbel-Softmax BIBREF14 as an approximation of the non-differentiable selecting process and train the retriever along with the Seq2Seq dialogue generation model. Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever. Both the retrievers trained with distant-supervision and Gumbel-Softmax technique outperform the compared systems in the automatic and human evaluations. Analysis empirically verifies our assumption that more than 80% responses in the dataset can be supported by a single KB row and better retrieval results lead to better task-oriented dialogue generation performance.
<<</Introduction>>>
<<<Definition>>>
In this section, we will describe the input and output of the end-to-end task-oriented dialogue system, and the definition of Seq2Seq task-oriented dialogue generation.
<<<Dialogue History>>>
Given a dialogue between a user ($u$) and a system ($s$), we follow eric:2017:SIGDial and represent the $k$-turned dialogue utterances as $\lbrace (u_{1}, s_{1} ), (u_{2} , s_{2} ), ... , (u_{k}, s_{k})\rbrace $. At the $i^{\text{th}}$ turn of the dialogue, we aggregate dialogue context which consists of the tokens of $(u_{1}, s_{1}, ..., s_{i-1}, u_{i})$ and use $\mathbf {x} = (x_{1}, x_{2}, ..., x_{m})$ to denote the whole dialogue history word by word, where $m$ is the number of tokens in the dialogue history.
<<</Dialogue History>>>
<<<Knowledge Base>>>
In this paper, we assume to have the access to a relational-database-like KB $B$, which consists of $|\mathcal {R}|$ rows and $|\mathcal {C}|$ columns. The value of entity in the $j^{\text{th}}$ row and the $i^{\text{th}}$ column is noted as $v_{j, i}$.
<<</Knowledge Base>>>
<<<Seq2Seq Dialogue Generation>>>
We define the Seq2Seq task-oriented dialogue generation as finding the most likely response $\mathbf {y}$ according to the input dialogue history $\mathbf {x}$ and KB $B$. Formally, the probability of a response is defined as
where $y_t$ represents an output token.
<<</Seq2Seq Dialogue Generation>>>
<<</Definition>>>
<<<Our Framework>>>
In this section, we describe our framework for end-to-end task-oriented dialogues. The architecture of our framework is demonstrated in Figure FIGREF3 and consists of two major components: a memory network-based retriever and the Seq2Seq dialogue generation with the KB-retriever. Our framework first uses the KB-retriever to select the most relevant KB row, and this result is used to filter the irrelevant entities in the Seq2Seq response generation model to improve the consistency among the output entities. During decoding, we further perform an attention mechanism to choose the most probable KB column. We present the details of our framework in the following sections.
<<<Encoder>>>
In our encoder, we adopt a bidirectional LSTM BIBREF15 to encode the dialogue history $\mathbf {x}$, which captures temporal relationships within the sequence. The encoder first maps the tokens in $\mathbf {x}$ to vectors with the embedding function $\phi ^{\text{emb}}$, and then the BiLSTM reads the vectors forward and backward to produce context-sensitive hidden states $(\mathbf {h}_{1}, \mathbf {h}_2, ..., \mathbf {h}_{m})$ by repeatedly applying the recurrence $\mathbf {h}_{i}=\text{BiLSTM}\left( \phi ^{\text{emb}}\left( x_{i}\right) , \mathbf {h}_{i-1}\right)$.
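A minimal PyTorch sketch of this encoder is shown below; the class name and dimensions are illustrative assumptions, not the paper's exact hyper-parameters.

import torch.nn as nn

class HistoryEncoder(nn.Module):
    # Embeds the dialogue history tokens and runs a BiLSTM over them.
    def __init__(self, vocab_size, emb_dim=200, hidden_dim=100):
        super().__init__()
        self.phi_emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                              bidirectional=True, batch_first=True)

    def forward(self, x):                 # x: (batch, m) token ids
        h, _ = self.bilstm(self.phi_emb(x))
        return h                          # (batch, m, 2 * hidden_dim)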
<<</Encoder>>>
<<<Vanilla Attention-based Decoder>>>
Here, we follow eric:2017:SIGDial and adopt an attention-based decoder to generate the response word by word. An LSTM is also used to represent the partially generated output sequence $(y_{1}, y_2, ...,y_{t-1})$ as $(\tilde{\mathbf {h}}_{1}, \tilde{\mathbf {h}}_2, ...,\tilde{\mathbf {h}}_t)$. For the generation of the next token $y_t$, their model first calculates an attentive representation $\tilde{\mathbf {h}}^{^{\prime }}_t$ of the dialogue history as
Then, the concatenation of the hidden representation of the partially outputted sequence $\tilde{\mathbf {h}}_t$ and the attentive dialogue history representation $\tilde{\mathbf {h}}^{^{\prime }}_t$ is projected to the vocabulary space $\mathcal {V}$ by $U$ as
to calculate the score (logit) for the next token generation. The probability of next token $y_t$ is finally calculated as
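The sketch below walks through one such decoding step. The bilinear attention score stands in for the paper's exact attention parameterization (not reproduced in this text), and all tensor names are assumptions.

import torch
import torch.nn.functional as F

def decode_step(h_enc, h_dec_t, W_attn, U):
    # h_enc: (m, d) encoder states, h_dec_t: (d,) current decoder state,
    # W_attn: (d, d) attention weights, U: (|V|, 2d) vocabulary projection.
    scores = h_enc @ (W_attn @ h_dec_t)          # attention logits over history
    alpha = F.softmax(scores, dim=0)
    h_ctx = alpha @ h_enc                        # attentive history representation
    logits = U @ torch.cat([h_dec_t, h_ctx])     # scores over the vocabulary V
    return F.softmax(logits, dim=0)              # p(y_t | y_<t, x)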
<<</Vanilla Attention-based Decoder>>>
<<<Entity-Consistency Augmented Decoder>>>
As shown in Section SECREF7, the generation of tokens is based only on the dialogue history attention, which makes the model ignorant of the KB entities. In this section, we present how to query the KB explicitly in two steps to improve entity consistency: we first adopt the KB-retriever to select the most relevant KB row, and the generation of KB entities from the entity-augmented decoder is constrained to the entities within the most probable row, which improves the entity generation consistency. Next, we perform column attention to select the most probable KB column. Finally, we show how to use the copy mechanism to incorporate the retrieved entity while decoding.
<<<KB Row Selection>>>
In our framework, the KB-retriever takes the dialogue history and KB rows as inputs and selects the most relevant row. This selection process resembles the task of selecting one word from the inputs to answer questions BIBREF13, and we use a memory network to model this process. In the following sections, we first describe how to represent the inputs and then introduce our memory network-based retriever.
<<<Dialogue History Representation:>>>
We encode the dialogue history by adopting the neural bag-of-words (BoW) representation, following the original paper BIBREF13. Each token in the dialogue history is mapped into a vector by another embedding function $\phi ^{\text{emb}^{\prime }}(x)$ and the dialogue history representation $\mathbf {q}$ is computed as the sum of these vectors: $\mathbf {q} = \sum ^{m}_{i=1} \phi ^{\text{emb}^{\prime }} (x_{i}) $.
<<</Dialogue History Representation:>>>
<<<KB Row Representation:>>>
In this section, we describe how to encode a KB row. Each KB cell is represented by the embedding of its cell value $v$, i.e., $\mathbf {c}_{j, k} = \phi ^{\text{value}}(v_{j, k})$, and the neural BoW is again used to represent a KB row $\mathbf {r}_{j}$ as $\mathbf {r}_{j} = \sum _{k=1}^{|\mathcal {C}|} \mathbf {c}_{j,k}$.
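Both the dialogue history and the KB rows are therefore plain neural bag-of-words sums; a small NumPy sketch (the embedding tables and id variables are assumed to be given) makes this explicit.

import numpy as np

def bow(ids, emb_table):
    # Neural bag-of-words: sum the embeddings of the given token/value ids.
    return emb_table[np.asarray(ids)].sum(axis=0)

# q   = bow(dialogue_history_ids, phi_emb_prime)   # dialogue query vector
# r_j = bow(row_j_cell_value_ids, phi_value)       # representation of KB row j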
<<</KB Row Representation:>>>
<<<Memory Network-Based Retriever:>>>
We model the KB retrieval process as selecting the row that most-likely supports the response generation. Memory network BIBREF13 has shown to be effective to model this kind of selection. For a $n$-hop memory network, the model keeps a set of input matrices $\lbrace R^{1}, R^{2}, ..., R^{n+1}\rbrace $, where each $R^{i}$ is a stack of $|\mathcal {R}|$ inputs $(\mathbf {r}^{i}_1, \mathbf {r}^{i}_2, ..., \mathbf {r}^{i}_{|\mathcal {R}|})$. The model also keeps query $\mathbf {q}^{1}$ as the input. A single hop memory network computes the probability $\mathbf {a}_j$ of selecting the $j^{\text{th}}$ input as
For the multi-hop cases, layers of single hop memory network are stacked and the query of the $(i+1)^{\text{th}}$ layer network is computed as
and the output of the last layer is used as the output of the whole network. For more details about memory network, please refer to the original paper BIBREF13.
After getting $\mathbf {a}$, we represent the retrieval results as a 0-1 matrix $T \in \lbrace 0, 1\rbrace ^{|\mathcal {R}|\times \mathcal {|C|}}$, where each element in $T$ is calculated as
In the retrieval result, $T_{j, k}$ indicates whether the entity in the $j^{\text{th}}$ row and the $k^{\text{th}}$ column is relevant to the final generation of the response. In this paper, we further flatten $T$ to a 0-1 vector $\mathbf {t} \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$ (where $|\mathcal {E}|$ equals $|\mathcal {R}|\times \mathcal {|C|}$) as our row retrieval result.
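Because the displayed equations are omitted in this text, the sketch below follows the standard end-to-end memory network formulation of BIBREF13 (softmax attention over rows and an additive query update across hops) and then turns the final distribution into the flattened 0-1 vector $\mathbf {t}$ by keeping only the argmax row; treat the exact update rule as an assumption.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def retrieve_row(q, R):
    # R is a list of n+1 matrices, each of shape (|rows|, d); q is the (d,) query.
    for i in range(len(R) - 1):
        a = softmax(R[i] @ q)        # attention over KB rows at hop i
        q = q + R[i + 1].T @ a       # query update for the next hop
    return a                         # final row probabilities

def flatten_retrieval(a, n_cols):
    T = np.zeros((a.shape[0], n_cols))
    T[np.argmax(a)] = 1.0            # 1 for every cell of the selected row
    return T.reshape(-1)             # the 0-1 vector t of length |E|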
<<</Memory Network-Based Retriever:>>>
<<</KB Row Selection>>>
<<<KB Column Selection>>>
After obtaining the row retrieval result that indicates which KB row is most relevant to the generation, we further perform column attention at decoding time to select the most probable KB column. For KB column selection, following eric:2017:SIGDial, we use the decoder hidden states $(\tilde{\mathbf {h}}_{1}, \tilde{\mathbf {h}}_2, ...,\tilde{\mathbf {h}}_t)$ to compute an attention score with the embedding of the column attribute name. The attention score $\mathbf {c}\in R^{|\mathcal {E}|}$ then becomes the logits of the column to be selected, which can be calculated as
where $\mathbf {c}_j$ is the attention score of the $j^{\text{th}}$ KB column and $\mathbf {k}_j$ is represented by the word embedding of the KB column name. $W^{^{\prime }}_{1}$, $W^{^{\prime }}_{2}$ and $\mathbf {t}^{T}$ are trainable parameters of the model.
<<</KB Column Selection>>>
<<<Decoder with Retrieved Entity>>>
After the row selection and column selection, we can define the final retrieved KB entity score as the element-wise product of the row retrieval result and the column selection score, which can be calculated as
where $\mathbf {v}^{t}$ denotes the final retrieved KB entity score. Finally, we follow eric:2017:SIGDial and use the copy mechanism to incorporate the retrieved entity, which can be defined as
where the dimensionality of $\mathbf {o}_t$ is $|\mathcal {V}| + |\mathcal {E}|$. In $\mathbf {v}^t$, the lower $|\mathcal {V}|$ entries are zero and the remaining $|\mathcal {E}|$ entries are the retrieved entity scores.
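A small sketch of how the pieces combine at one decoding step is given below; the function and variable names are assumptions, and the entity part of $\mathbf {o}_t$ is taken to be the element-wise product of the 0-1 row vector $\mathbf {t}$ and the column scores broadcast over the flattened KB cells.

import numpy as np

def output_logits(vocab_logits, t_flat, col_scores_flat):
    # vocab_logits: (|V|,), t_flat and col_scores_flat: (|E|,)
    v_t = t_flat * col_scores_flat              # retrieved KB entity scores
    return np.concatenate([vocab_logits, v_t])  # o_t with size |V| + |E|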
<<</Decoder with Retrieved Entity>>>
<<</Entity-Consistency Augmented Decoder>>>
<<</Our Framework>>>
<<<Training the KB-Retriever>>>
As mentioned in Section SECREF9, we adopt the memory network to model our KB-retriever. However, in Seq2Seq dialogue generation, the training data does not include annotated KB row retrieval results, which makes supervised training of the KB-retriever impossible. To tackle this problem, we propose two training methods for our KB-row-retriever. 1) In the first method, inspired by the recent success of distant supervision in information extraction BIBREF16, BIBREF17, BIBREF18, BIBREF19, we take advantage of the similarity between the surface strings of KB entries and the reference response, and design a set of heuristics to extract training data for the KB-retriever. 2) In the second method, instead of training the KB-retriever as an independent component, we train it along with the training of the Seq2Seq dialogue generation. To make the retrieval process in Equation DISPLAY_FORM13 differentiable, we use Gumbel-Softmax BIBREF14 as an approximation of the $\operatornamewithlimits{argmax}$ during training.
<<<Training with Distant Supervision>>>
Although it is difficult to obtain annotated retrieval data for the KB-retriever, we can “guess” the most relevant KB row from the reference response and thereby obtain weakly labeled data for the retriever. Intuitively, a dialogue usually revolves around one topic, so the KB row that contains the largest number of entities mentioned in the whole dialogue should support its utterances. In our training with distant supervision, we further simplify this assumption and assume that one dialogue, which usually belongs to one topic, can be supported by the single most relevant KB row. This means that for a $k$-turned dialogue, we construct $k$ pairs of training instances for the retriever, and all the inputs $(u_{1}, s_{1}, ..., s_{i-1}, u_{i} \mid i \le k)$ are associated with the same weakly labeled KB retrieval result $T^*$.
In this paper, we compute each row's similarity to the whole dialogue and choose the most similar row as $T^*$. We define the similarity of each row as the number of matched spans with the surface form of the entities in the row. Taking the dialogue in Figure FIGREF1 for an example, the similarity of the 4$^\text{th}$ row equals to 4 with “200 Alester Ave”, “gas station”, “Valero”, and “road block nearby” matching the dialogue context; and the similarity of the 7$^\text{th}$ row equals to 1 with only “road block nearby” matching.
In our model with the distantly supervised retriever, the retrieval results serve as the input for the Seq2Seq generation. During training the Seq2Seq generation, we use the weakly labeled retrieval result $T^{*}$ as the input.
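The heuristic itself is easy to state in code. The sketch below scores each KB row by how many of its entity surface forms appear in the dialogue and labels the best-scoring row as $T^*$; plain string containment is used here as a stand-in for the paper's span matching, and the function name is an assumption.

def weak_label_row(dialogue_text, kb_rows):
    # kb_rows: list of rows, each a list of entity value strings (or None).
    def score(row):
        return sum(1 for value in row if value and value in dialogue_text)
    best = max(range(len(kb_rows)), key=lambda j: score(kb_rows[j]))
    return best   # index of the weakly labeled most relevant row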
<<</Training with Distant Supervision>>>
<<<Training with Gumbel-Softmax>>>
In addition to treating the row retrieval result as an input to the generation model and training the KB-row-retriever independently, we can also train the retriever along with the Seq2Seq dialogue generation in an end-to-end fashion. The major difficulty of such a training scheme is that the discrete retrieval result is not differentiable, so the training signal from the generation model cannot be passed to the parameters of the retriever. The Gumbel-Softmax technique BIBREF14 has been shown to be an effective approximation of discrete variables and has been proved to work in sentence representation. In this paper, we adopt the Gumbel-Softmax technique to train the KB-retriever. We use
as the approximation of $T$, where $\mathbf {g}_{j}$ are i.i.d samples drawn from $\text{Gumbel}(0,1)$ and $\tau $ is a constant that controls the smoothness of the distribution. $T^{\text{approx}}_{j}$ replaces $T^{\text{}}_{j}$ in equation DISPLAY_FORM13 and goes through the same flattening and expanding process as $\mathbf {V}$ to get $\mathbf {v}^{\mathbf {t}^{\text{approx}^{\prime }}}$ and the training signal from Seq2Seq generation is passed via the logit
To make training with Gumbel-Softmax more stable, we first initialize the parameters by pre-training the KB-retriever with distant supervision and further fine-tuning our framework.
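A NumPy sketch of the Gumbel-Softmax relaxation is given below for clarity; in practice it would be written in the autodiff framework used for training so that gradients flow back to the retriever, and the logits would be the (log) row-attention scores of the memory network.

import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))   # Gumbel(0, 1) noise
    z = (logits + g) / tau                                  # temperature tau
    e = np.exp(z - z.max())
    return e / e.sum()    # approaches a one-hot row selection as tau -> 0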
<<</Training with Gumbel-Softmax>>>
<<<Experimental Settings>>>
We choose the InCar Assistant dataset BIBREF6, which includes three distinct domains: the navigation, weather and calendar domains. For the weather domain, we follow wen2018sequence and separate the highest temperature, lowest temperature and weather attribute into three different columns. For the calendar domain, there are some dialogues without a KB or with an incomplete KB. In this case, we pad a special token “-” into these incomplete KBs. Our framework is trained separately on these three domains, using the same train/validation/test split as eric:2017:SIGDial. To justify the generalization of the proposed model, we also use another public dataset, CamRest BIBREF11, and partition the dataset into training, validation and testing sets in the ratio 3:1:1. In particular, we hired human experts to format the CamRest dataset by equipping every dialogue with its corresponding KB.
All hyper-parameters are selected according to the validation set. We use a three-hop memory network to model our KB-retriever. The dimensionality of the embeddings is selected from $\lbrace 100, 200\rbrace $ and the number of LSTM hidden units is selected from $\lbrace 50, 100, 150, 200, 350\rbrace $. The dropout rate we use in our framework is selected from $\lbrace 0.25, 0.5, 0.75\rbrace $ and the batch size we adopt is selected from $\lbrace 1,2\rbrace $. L2 regularization is applied to our model with a coefficient of $5\times 10^{-6}$ to reduce overfitting. For training the retriever with distant supervision, we adopt the weight tying trick BIBREF20. We use Adam BIBREF21 to optimize the parameters in our model and adopt the suggested hyper-parameters for optimization.
We adopt both the automatic and human evaluations in our experiments.
<<</Experimental Settings>>>
<<<Baseline Models>>>
We compare our model with several baselines including:
Attn seq2seq BIBREF22: A model with simple attention over the input context at each time step during decoding.
Ptr-UNK BIBREF23: Ptr-UNK is the model which augments a sequence-to-sequence architecture with attention-based copy mechanism over the encoder context.
KV Net BIBREF6: The model adopts an augmented decoder which decodes over the concatenation of the vocabulary and the KB entities, which allows the model to generate entities.
Mem2Seq BIBREF7: Mem2Seq is the model that takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output.
DSR BIBREF9: DSR leveraged dialogue state representation to retrieve the KB implicitly and applied copying mechanism to retrieve entities from knowledge base while decoding.
On the InCar dataset, for Attn seq2seq, Ptr-UNK and Mem2Seq, we adopt the results reported by madotto2018mem2seq. On the CamRest dataset, for Mem2Seq, we use their open-sourced code to obtain the results, while for DSR, we run their code on the same dataset to obtain the results.
<<</Baseline Models>>>
<<</Training the KB-Retriever>>>
<<<Results>>>
Follow the prior works BIBREF6, BIBREF7, BIBREF9, we adopt the BLEU and the Micro Entity F1 to evaluate our model performance. The experimental results are illustrated in Table TABREF30.
In the first block of Table TABREF30, we show the Human, rule-based and KV Net (with *) results, which are reported from eric:2017:SIGDial. We argue that their results are not directly comparable because their work uses the entities in their canonicalized forms, which are not calculated based on real entity values. It is worth noting that our framework, with both training methods, still outperforms KV Net on the InCar dataset on the overall BLEU and Entity F1 metrics, which demonstrates the effectiveness of our framework.
In the second block of Table TABREF30, we can see that our framework trained with both distant supervision and Gumbel-Softmax beats all existing models on the two datasets. Our model outperforms each baseline on both the BLEU and F1 metrics. On the InCar dataset, our model with Gumbel-Softmax has the highest BLEU compared with the baselines, which shows that our framework can generate more fluent responses. In particular, our framework achieves a 2.5% improvement in the navigation domain, a 1.8% improvement in the weather domain and a 3.5% improvement in the calendar domain on the F1 metric. This indicates the effectiveness of our KB-retriever module: our framework can retrieve more correct entities from the KB. On the CamRest dataset, the same trend of improvement is observed, which further shows the effectiveness of our framework.
Besides, we observe that the model trained with Gumbel-Softmax outperforms the one trained with the distant supervision method. We attribute this to the fact that the KB-retriever and the Seq2Seq module are fine-tuned in an end-to-end fashion, which can refine the KB-retriever and further promote the dialogue generation.
<<<The proportion of responses that can be supported by a single KB row>>>
In this section, we verify our assumption by examining the proportion of responses that can be supported by a single row.
We define a response as being supported by the most relevant KB row if all the entities it mentions are included in that row. We study the proportion of such responses over the test set. The number is 95% for the navigation domain, 90% for the CamRest dataset and 80% for the weather domain. This confirms our assumption that most responses can be supported by the relevant KB row. Correctly retrieving the supporting row should therefore be beneficial.
We further study the weather domain to examine the remaining 20% of exceptions. Instead of being supported by multiple rows, most of these exceptions cannot be supported by any KB row. For example, there is one case whose reference response is “It 's not rainy today”, and the related KB entity is sunny. Such cases provide challenges beyond the scope of this paper. If we consider this kind of case as being supported by a single row, the proportion in the weather domain is 99%.
<<</The proportion of responses that can be supported by a single KB row>>>
<<<Generation Consistency>>>
In this paper, we expect the consistent generation from our model. To verify this, we compute the consistency recall of the utterances that have multiple entities. An utterance is considered as consistent if it has multiple entities and these entities belong to the same row which we annotated with distant supervision.
The consistency result is shown in Table TABREF37. From this table, we can see that incorporating retriever in the dialogue generation improves the consistency.
<<</Generation Consistency>>>
<<<Correlation between the number of KB rows and generation consistency>>>
To explore the correlation between the number of KB rows and generation consistency, we conduct experiments with the distantly supervised model.
We choose KBs with different numbers of rows, on a scale from 1 to 5, for the generation. From Figure FIGREF32, as the number of KB rows increases, we can see a decrease in generation consistency. This indicates that irrelevant information harms the dialogue generation consistency.
<<</Correlation between the number of KB rows and generation consistency>>>
<<<Visualization>>>
To gain more insight into how our retriever module influences the whole KB score distribution, we visualize the KB entity probabilities at the decoding position where we generate the entity 200_Alester_Ave. From the example (Fig FIGREF38), we can see that the $4^\text{th}$ row and the $1^\text{st}$ column have the highest probabilities for generating 200_Alester_Ave, which verifies the effectiveness of first selecting the most relevant KB row and then selecting the most relevant KB column.
<<</Visualization>>>
<<<Human Evaluation>>>
We provide a human evaluation of our framework and the compared models. The evaluated responses are based on distinct dialogue histories. We hire several human experts and ask them to judge the quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5. In each judgment, the expert is presented with the dialogue history, an output of a system with its name anonymized, and the gold response.
The evaluation results are illustrated in Table TABREF37. Our framework outperforms the other baseline models on all metrics according to Table TABREF37. The most significant improvement is in correctness, indicating that our model can retrieve accurate entities from the KB and generate the informative content that users want to know.
<<</Human Evaluation>>>
<<</Results>>>
<<<Related Work>>>
Sequence-to-sequence (Seq2Seq) models for text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have gained popularity and have been applied to open-domain dialogues BIBREF24, BIBREF25 with end-to-end training. Recently, Seq2Seq models have also been used for learning task-oriented dialogues, and how to query the structured KB remains a key challenge.
Properly querying the KB has long been a challenge in the task-oriented dialogue system. In the pipeline system, the KB query is strongly correlated with the design of language understanding, state tracking, and policy management. Typically, after obtaining the dialogue state, the policy management module issues an API call accordingly to query the KB. With the development of neural networks in natural language processing, efforts have been made to replace the discrete and pre-defined dialogue state with a distributed representation BIBREF10, BIBREF11, BIBREF12, BIBREF26. In our framework, our retrieval result can be treated as a numeric representation of the API call return.
Instead of interacting with the KB via API calls, more and more recent works try to incorporate the KB query as a part of the model. The most popular way of modeling the KB query is treating it as an attention network over the entire set of KB entities BIBREF6, BIBREF27, BIBREF8, BIBREF28, BIBREF29, and the return can be a fuzzy summation of the entity representations. madotto2018mem2seq's practice of modeling the KB query with a memory network can also be considered as learning an attentive preference over these entities. wen2018sequence propose an implicit dialogue state representation to query the KB and achieve promising performance. Different from their models, we propose the KB-retriever to explicitly query the KB, and the query result is used to filter the irrelevant entities in the dialogue generation to improve the consistency among the output entities.
<<</Related Work>>>
<<<Conclusion>>>
In this paper, we propose a novel framework to improve entity consistency by querying the KB in two steps. In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce the KB-retriever to return the most relevant KB row, which is used to filter the irrelevant KB entities and encourage consistent generation. In the second step, we further perform the attention mechanism to select the most relevant KB column. Experimental results show the effectiveness of our method. Extensive analysis further confirms the observation and reveals the correlation between the success of KB query and the success of task-oriented dialogue generation.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nDefinition\nDialogue History\nKnowledge Base\nSeq2Seq Dialogue Generation\nOur Framework\nEncoder\nVanilla Attention-based Decoder\nEntity-Consistency Augmented Decoder\nKB Row Selection\nDialogue History Representation:\nKB Row Representation:\nMemory Network-Based Retriever:\nKB Column Selection\nDecoder with Retrieved Entity\nTraining the KB-Retriever\nTraining with Distant Supervision\nTraining with Gumbel-Softmax\nExperimental Settings\nBaseline Models\nResults\nThe proportion of responses that can be supported by a single KB row\nGeneration Consistency\nCorrelation between the number of KB rows and generation consistency\nVisualization\nHuman Evaluation\nRelated Work\nConclusion"
],
"type": "outline"
}
|
1911.00069
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Neural Cross-Lingual Relation Extraction Based on Bilingual Word Embedding Mapping
<<<Abstract>>>
Relation extraction (RE) seeks to detect and classify semantic relationships between entities, which provides useful information for many NLP applications. Since the state-of-the-art RE models require large amounts of manually annotated data and language-specific resources to achieve high accuracy, it is very challenging to transfer an RE model of a resource-rich language to a resource-poor language. In this paper, we propose a new approach for cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language, so that a well-trained source-language neural network RE model can be directly applied to the target language. Experiment results show that the proposed approach achieves very good performance for a number of target languages on both in-house and open datasets, using a small bilingual dictionary with only 1K word pairs.
<<</Abstract>>>
<<<Introduction>>>
Relation extraction (RE) is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons, organizations, geo-political entities, locations, and events. It provides useful information for many NLP applications such as knowledge base construction, text mining and question answering. For example, the entity Washington, D.C. and the entity United States have a CapitalOf relationship, and extraction of such relationships can help answer questions like “What is the capital city of the United States?"
Traditional RE models (e.g., BIBREF0, BIBREF1, BIBREF2) require careful feature engineering to derive and combine various lexical, syntactic and semantic features. Recently, neural network RE models (e.g., BIBREF3, BIBREF4, BIBREF5, BIBREF6) have become very successful. These models employ a certain level of automatic feature learning by using word embeddings, which significantly simplifies the feature engineering task while considerably improving the accuracy, achieving the state-of-the-art performance for relation extraction.
All the above RE models are supervised machine learning models that need to be trained with large amounts of manually annotated RE data to achieve high accuracy. However, annotating RE data by human is expensive and time-consuming, and can be quite difficult for a new language. Moreover, most RE models require language-specific resources such as dependency parsers and part-of-speech (POS) taggers, which also makes it very challenging to transfer an RE model of a resource-rich language to a resource-poor language.
There are a few existing weakly supervised cross-lingual RE approaches that require no human annotation in the target languages, e.g., BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, the existing approaches require aligned parallel corpora or machine translation systems, which may not be readily available in practice.
In this paper, we make the following contributions to cross-lingual RE:
We propose a new approach for direct cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language (e.g., English), so that a well-trained source-language RE model can be directly applied to the target language, with no manually annotated RE data needed for the target language.
We design a deep neural network architecture for the source-language (English) RE model that uses word embeddings and generic language-independent features as the input. The English RE model achieves the-state-of-the-art performance without using language-specific resources.
We conduct extensive experiments which show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model) for a number of target languages on both in-house and the ACE05 datasets BIBREF11, using a small bilingual dictionary with only 1K word pairs. To the best of our knowledge, this is the first work that includes empirical studies for cross-lingual RE on several languages across a variety of language families, without using aligned parallel corpora or machine translation systems.
We organize the paper as follows. In Section 2 we provide an overview of our approach. In Section 3 we describe how to build monolingual word embeddings and learn a linear mapping between two languages. In Section 4 we present a neural network architecture for the source-language (English) RE model. In Section 5 we evaluate the performance of the proposed approach for a number of target languages. We discuss related work in Section 6 and conclude the paper in Section 7.
<<</Introduction>>>
<<<Overview of the Approach>>>
We summarize the main steps of our neural cross-lingual RE model transfer approach as follows.
Build word embeddings for the source language and the target language separately using monolingual data.
Learn a linear mapping that projects the target-language word embeddings into the source-language embedding space using a small bilingual dictionary.
Build a neural network source-language RE model that uses word embeddings and generic language-independent features as the input.
For a target-language sentence and any two entities in it, project the word embeddings of the words in the sentence to the source-language word embeddings using the linear mapping, and then apply the source-language RE model on the projected word embeddings to classify the relationship between the two entities. An example is shown in Figure FIGREF4, where the target language is Portuguese and the source language is English.
We will describe each component of our approach in the subsequent sections.
<<</Overview of the Approach>>>
<<<Cross-Lingual Word Embeddings>>>
In recent years, vector representations of words, known as word embeddings, become ubiquitous for many NLP applications BIBREF12, BIBREF13, BIBREF14.
A monolingual word embedding model maps words in the vocabulary $\mathcal {V}$ of a language to real-valued vectors in $\mathbb {R}^{d\times 1}$. The dimension of the vector space $d$ is normally much smaller than the size of the vocabulary $V=|\mathcal {V}|$ for efficient representation. It also aims to capture semantic similarities between the words based on their distributional properties in large samples of monolingual data.
Cross-lingual word embedding models try to build word embeddings across multiple languages BIBREF15, BIBREF16. One approach builds monolingual word embeddings separately and then maps them to the same vector space using a bilingual dictionary BIBREF17, BIBREF18. Another approach builds multilingual word embeddings in a shared vector space simultaneously, by generating mixed language corpora using aligned sentences BIBREF19, BIBREF20.
In this paper, we adopt the technique in BIBREF17 because it only requires a small bilingual dictionary of aligned word pairs, and does not require parallel corpora of aligned sentences which could be more difficult to obtain.
<<<Monolingual Word Embeddings>>>
To build monolingual word embeddings for the source and target languages, we use a variant of the Continuous Bag-of-Words (CBOW) word2vec model BIBREF13.
The standard CBOW model has two matrices, the input word matrix $\tilde{\mathbf {X}} \in \mathbb {R}^{d\times V}$ and the output word matrix $\mathbf {X} \in \mathbb {R}^{d\times V}$. For the $i$th word $w_i$ in $\mathcal {V}$, let $\mathbf {e}(w_i) \in \mathbb {R}^{V \times 1}$ be a one-hot vector with 1 at index $i$ and 0s at other indexes, so that $\tilde{\mathbf {x}}_i = \tilde{\mathbf {X}}\mathbf {e}(w_i)$ (the $i$th column of $\tilde{\mathbf {X}}$) is the input vector representation of word $w_i$, and $\mathbf {x}_i = \mathbf {X}\mathbf {e}(w_i)$ (the $i$th column of $\mathbf {X}$) is the output vector representation (i.e., word embedding) of word $w_i$.
Given a sequence of training words $w_1, w_2, ..., w_N$, the CBOW model seeks to predict a target word $w_t$ using a window of $2c$ context words surrounding $w_t$, by maximizing the following objective function:
The conditional probability is calculated using a softmax function:
where $\mathbf {x}_t=\mathbf {X}\mathbf {e}(w_t)$ is the output vector representation of word $w_t$, and
is the sum of the input vector representations of the context words.
In our variant of the CBOW model, we use a separate input word matrix $\tilde{\mathbf {X}}_j$ for a context word at position $j, -c \le j \le c, j\ne 0$. In addition, we employ weights that decay with the distances of the context words to the target word. Under these modifications, we have
We use the variant to build monolingual word embeddings because experiments on named entity recognition and word similarity tasks showed this variant leads to small improvements over the standard CBOW model BIBREF21.
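A sketch of the modified context representation follows. Since the displayed equation is not reproduced in this text, the 1/|j| decay is only an assumed weighting schedule; what matters for the illustration is that each position has its own input matrix and that farther context words contribute less.

import numpy as np

def context_vector(context, X_in_by_pos):
    # context: dict mapping position j (-c..c, j != 0) -> word id
    # X_in_by_pos: dict mapping position j -> (d, V) input matrix for that slot
    d = next(iter(X_in_by_pos.values())).shape[0]
    x0 = np.zeros(d)
    for j, w in context.items():
        x0 += (1.0 / abs(j)) * X_in_by_pos[j][:, w]   # assumed distance decay
    return x0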
<<</Monolingual Word Embeddings>>>
<<<Bilingual Word Embedding Mapping>>>
BIBREF17 observed that word embeddings of different languages often have similar geometric arrangements, and suggested to learn a linear mapping between the vector spaces.
Let $\mathcal {D}$ be a bilingual dictionary with aligned word pairs ($w_i, v_i)_{i=1,...,D}$ between a source language $s$ and a target language $t$, where $w_i$ is a source-language word and $v_i$ is the translation of $w_i$ in the target language. Let $\mathbf {x}_i \in \mathbb {R}^{d \times 1}$ be the word embedding of the source-language word $w_i$, $\mathbf {y}_i \in \mathbb {R}^{d \times 1}$ be the word embedding of the target-language word $v_i$.
We find a linear mapping (matrix) $\mathbf {M}_{t\rightarrow s}$ such that $\mathbf {M}_{t\rightarrow s}\mathbf {y}_i$ approximates $\mathbf {x}_i$, by solving the following least squares problem using the dictionary as the training set:
Using $\mathbf {M}_{t\rightarrow s}$, for any target-language word $v$ with word embedding $\mathbf {y}$, we can project it into the source-language embedding space as $\mathbf {M}_{t\rightarrow s}\mathbf {y}$.
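A minimal sketch of this mapping step using NumPy's least-squares solver; the variable names and toy dimensions are assumptions for illustration, not the authors' code.

```python
import numpy as np

def learn_mapping(Y_dict, X_dict):
    """Find M such that M @ y_i approximates x_i for all dictionary pairs.

    Y_dict: (D, d) embeddings of target-language dictionary words (rows y_i)
    X_dict: (D, d) embeddings of their English translations (rows x_i)
    """
    # np.linalg.lstsq solves Y_dict @ W ~= X_dict in the least-squares sense,
    # i.e. y_i^T W ~= x_i^T for every row, so M = W^T satisfies M y_i ~= x_i.
    W, *_ = np.linalg.lstsq(Y_dict, X_dict, rcond=None)
    return W.T

def project(M, Y):
    """Project target-language embeddings (rows of Y) into the English space."""
    return Y @ M.T

# Toy usage with d = 300 and a 1K-pair dictionary
rng = np.random.default_rng(0)
Y_dict, X_dict = rng.normal(size=(1000, 300)), rng.normal(size=(1000, 300))
M = learn_mapping(Y_dict, X_dict)
print(project(M, rng.normal(size=(5, 300))).shape)  # (5, 300)
```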
<<<Length Normalization and Orthogonal Transformation>>>
To ensure that all the training instances in the dictionary $\mathcal {D}$ contribute equally to the optimization objective in (DISPLAY_FORM14) and to preserve vector norms after projection, we have tried length normalization and orthogonal transformation for learning the bilingual mapping as in BIBREF22, BIBREF23, BIBREF24.
First, we normalize the source-language and target-language word embeddings to be unit vectors: $\mathbf {x}^{\prime }=\frac{\mathbf {x}}{||\mathbf {x}||}$ for each source-language word embedding $\mathbf {x}$, and $\mathbf {y}^{\prime }= \frac{\mathbf {y}}{||\mathbf {y}||}$ for each target-language word embedding $\mathbf {y}$.
Next, we add an orthogonality constraint to (DISPLAY_FORM14) such that $\mathbf {M}$ is an orthogonal matrix, i.e., $\mathbf {M}^\mathrm {T}\mathbf {M} = \mathbf {I}$ where $\mathbf {I}$ denotes the identity matrix:
$\mathbf {M}^{O} _{t\rightarrow s}$ can be computed using singular-value decomposition (SVD).
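The closed-form SVD solution mentioned above can be sketched as follows (the classical orthogonal Procrustes solution); this is a generic implementation under the stated length-normalization assumption, not the authors' exact code.

```python
import numpy as np

def orthogonal_mapping(Y_dict, X_dict):
    """Orthogonal M minimizing sum_i ||M y_i - x_i||^2 subject to M^T M = I.

    Rows of Y_dict and X_dict are assumed to be length-normalized embeddings.
    """
    U, _, Vt = np.linalg.svd(X_dict.T @ Y_dict)   # SVD of the cross-covariance
    return U @ Vt                                  # orthogonal by construction

rng = np.random.default_rng(0)
Y = rng.normal(size=(1000, 300)); Y /= np.linalg.norm(Y, axis=1, keepdims=True)
X = rng.normal(size=(1000, 300)); X /= np.linalg.norm(X, axis=1, keepdims=True)
M = orthogonal_mapping(Y, X)
print(np.allclose(M.T @ M, np.eye(300)))  # True
```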
<<</Length Normalization and Orthogonal Transformation>>>
<<<Semi-Supervised and Unsupervised Mappings>>>
The mapping learned in (DISPLAY_FORM14) or (DISPLAY_FORM16) requires a seed dictionary. To relax this requirement, BIBREF25 proposed a self-learning procedure that can be combined with a dictionary-based mapping technique. Starting with a small seed dictionary, the procedure iteratively 1) learns a mapping using the current dictionary; and 2) computes a new dictionary using the learned mapping.
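A schematic sketch of this self-learning loop, assuming (as one common choice) that step 2 pairs each target-language word with its nearest English neighbour after projection; the induction rule, the fixed number of iterations and the use of plain least squares are illustrative assumptions rather than the cited method's exact details.

```python
import numpy as np

def induce_dictionary(M, Y_vocab, X_vocab):
    """Step 2: pair every target-language word with its nearest English word."""
    # Full similarity matrix (cosine if rows are unit-length); fine for a toy vocabulary.
    sims = (Y_vocab @ M.T) @ X_vocab.T
    return Y_vocab, X_vocab[sims.argmax(axis=1)]

def self_learning(Y_vocab, X_vocab, Y_seed, X_seed, n_iters=5):
    Y_dict, X_dict = Y_seed, X_seed
    M = None
    for _ in range(n_iters):
        W, *_ = np.linalg.lstsq(Y_dict, X_dict, rcond=None)        # step 1: fit mapping
        M = W.T
        Y_dict, X_dict = induce_dictionary(M, Y_vocab, X_vocab)    # step 2: new dictionary
    return M
```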
BIBREF26 proposed an unsupervised method to learn the bilingual mapping without using a seed dictionary. The method first uses a heuristic to build an initial dictionary that aligns the vocabularies of two languages, and then applies a robust self-learning procedure to iteratively improve the mapping. Another unsupervised method based on adversarial training was proposed in BIBREF27.
We compare the performance of different mappings for cross-lingual RE model transfer in Section SECREF45.
<<</Semi-Supervised and Unsupervised Mappings>>>
<<</Bilingual Word Embedding Mapping>>>
<<</Cross-Lingual Word Embeddings>>>
<<<Neural Network RE Models>>>
For any two entities in a sentence, an RE model determines whether these two entities have a relationship, and if yes, classifies the relationship into one of the pre-defined relation types. We focus on neural network RE models since these models achieve the state-of-the-art performance for relation extraction. Most importantly, neural network RE models use word embeddings as the input, which are amenable to cross-lingual model transfer via cross-lingual word embeddings. In this paper, we use English as the source language.
Our neural network architecture has four layers. The first layer is the embedding layer which maps input words in a sentence to word embeddings. The second layer is a context layer which transforms the word embeddings to context-aware vector representations using a recurrent or convolutional neural network layer. The third layer is a summarization layer which summarizes the vectors in a sentence by grouping and pooling. The final layer is the output layer which returns the classification label for the relation type.
<<<Embedding Layer>>>
For an English sentence with $n$ words $\mathbf {s}=(w_1,w_2,...,w_n)$, the embedding layer maps each word $w_t$ to a real-valued vector (word embedding) $\mathbf {x}_t\in \mathbb {R}^{d \times 1}$ using the English word embedding model (Section SECREF9). In addition, for each entity $m$ in the sentence, the embedding layer maps its entity type to a real-valued vector (entity label embedding) $\mathbf {l}_m \in \mathbb {R}^{d_m \times 1}$ (initialized randomly). In our experiments we use $d=300$ and $d_m = 50$.
<<</Embedding Layer>>>
<<<Context Layer>>>
Given the word embeddings $\mathbf {x}_t$'s of the words in the sentence, the context layer tries to build a sentence-context-aware vector representation for each word. We consider two types of neural network layers that aim to achieve this.
<<<Bi-LSTM Context Layer>>>
The first type of context layer is based on Long Short-Term Memory (LSTM) type recurrent neural networks BIBREF28, BIBREF29. Recurrent neural networks (RNNs) are a class of neural networks that operate on sequential data such as sequences of words. LSTM networks are a type of RNN designed to better capture long-range dependencies in sequential data.
We pass the word embeddings $\mathbf {x}_t$'s to a forward and a backward LSTM layer. A forward or backward LSTM layer consists of a set of recurrently connected blocks known as memory blocks. The memory block at the $t$-th word in the forward LSTM layer contains a memory cell $\overrightarrow{\mathbf {c}}_t$ and three gates: an input gate $\overrightarrow{\mathbf {i}}_t$, a forget gate $\overrightarrow{\mathbf {f}}_t$ and an output gate $\overrightarrow{\mathbf {o}}_t$ ($\overrightarrow{\cdot }$ indicates the forward direction), which are updated as follows:
where $\sigma $ is the element-wise sigmoid function and $\odot $ is the element-wise multiplication.
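The gate updates themselves are not reproduced here; for reference, the standard forward-LSTM updates consistent with this notation are (a sketch, possibly differing from the paper's exact display): $\overrightarrow{\mathbf {i}}_t = \sigma (\mathbf {W}_i \mathbf {x}_t + \mathbf {U}_i \overrightarrow{\mathbf {h}}_{t-1} + \mathbf {b}_i)$, $\overrightarrow{\mathbf {f}}_t = \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f \overrightarrow{\mathbf {h}}_{t-1} + \mathbf {b}_f)$, $\overrightarrow{\mathbf {o}}_t = \sigma (\mathbf {W}_o \mathbf {x}_t + \mathbf {U}_o \overrightarrow{\mathbf {h}}_{t-1} + \mathbf {b}_o)$, $\overrightarrow{\mathbf {c}}_t = \overrightarrow{\mathbf {f}}_t \odot \overrightarrow{\mathbf {c}}_{t-1} + \overrightarrow{\mathbf {i}}_t \odot \tanh (\mathbf {W}_c \mathbf {x}_t + \mathbf {U}_c \overrightarrow{\mathbf {h}}_{t-1} + \mathbf {b}_c)$, and $\overrightarrow{\mathbf {h}}_t = \overrightarrow{\mathbf {o}}_t \odot \tanh (\overrightarrow{\mathbf {c}}_t)$, where the $\mathbf {W}_{\ast }$, $\mathbf {U}_{\ast }$ and $\mathbf {b}_{\ast }$ are learned parameters.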
The hidden state vector $\overrightarrow{\mathbf {h}}_t$ in the forward LSTM layer incorporates information from the left (past) tokens of $w_t$ in the sentence. Similarly, we can compute the hidden state vector $\overleftarrow{\mathbf {h}}_t$ in the backward LSTM layer, which incorporates information from the right (future) tokens of $w_t$ in the sentence. The concatenation of the two vectors $\mathbf {h}_t = [\overrightarrow{\mathbf {h}}_t, \overleftarrow{\mathbf {h}}_t]$ is a good representation of the word $w_t$ with both left and right contextual information in the sentence.
<<</Bi-LSTM Context Layer>>>
<<<CNN Context Layer>>>
The second type of context layer is based on Convolutional Neural Networks (CNNs) BIBREF3, BIBREF4, which applies a convolution-like operation on successive windows of size $k$ around each word in the sentence. Let $\mathbf {z}_t = [\mathbf {x}_{t-(k-1)/2},...,\mathbf {x}_{t+(k-1)/2}]$ be the concatenation of $k$ word embeddings around $w_t$. The convolutional layer computes a hidden state vector
for each word $w_t$, where $\mathbf {W}$ is a weight matrix and $\mathbf {b}$ is a bias vector, and $\tanh (\cdot )$ is the element-wise hyperbolic tangent function.
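A minimal PyTorch sketch of this convolutional context layer, reading the description as $\mathbf {h}_t = \tanh (\mathbf {W}\mathbf {z}_t + \mathbf {b})$; the window size $k=3$ matches the setting used later in the experiments, while the hidden size and padding choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNContextLayer(nn.Module):
    """h_t = tanh(W z_t + b) over a window of k word embeddings around each word."""

    def __init__(self, d_word=300, d_hidden=256, k=3):
        super().__init__()
        # Conv1d over the time axis is equivalent to applying W to the
        # concatenation of the k embeddings in each window.
        self.conv = nn.Conv1d(d_word, d_hidden, kernel_size=k, padding=k // 2)

    def forward(self, x):                 # x: (batch, seq_len, d_word)
        h = self.conv(x.transpose(1, 2))  # (batch, d_hidden, seq_len)
        return torch.tanh(h).transpose(1, 2)

layer = CNNContextLayer()
print(layer(torch.randn(2, 12, 300)).shape)  # torch.Size([2, 12, 256])
```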
<<</CNN Context Layer>>>
<<</Context Layer>>>
<<<Summarization Layer>>>
After the context layer, the sentence $(w_1,w_2,...,w_n)$ is represented by $(\mathbf {h}_1,....,\mathbf {h}_n)$. Suppose $m_1=(w_{b_1},..,w_{e_1})$ and $m_2=(w_{b_2},..,w_{e_2})$ are two entities in the sentence where $m_1$ is on the left of $m_2$ (i.e., $e_1 < b_2$). As different sentences and entities may have various lengths, the summarization layer tries to build a fixed-length vector that best summarizes the representations of the sentence and the two entities for relation type classification.
We divide the hidden state vectors $\mathbf {h}_t$'s into 5 groups:
$G_1=\lbrace \mathbf {h}_{1},..,\mathbf {h}_{b_1-1}\rbrace $ includes vectors that are to the left of the first entity $m_1$.
$G_2=\lbrace \mathbf {h}_{b_1},..,\mathbf {h}_{e_1}\rbrace $ includes vectors that are in the first entity $m_1$.
$G_3=\lbrace \mathbf {h}_{e_1+1},..,\mathbf {h}_{b_2-1}\rbrace $ includes vectors that are between the two entities.
$G_4=\lbrace \mathbf {h}_{b_2},..,\mathbf {h}_{e_2}\rbrace $ includes vectors that are in the second entity $m_2$.
$G_5=\lbrace \mathbf {h}_{e_2+1},..,\mathbf {h}_{n}\rbrace $ includes vectors that are to the right of the second entity $m_2$.
We perform element-wise max pooling among the vectors in each group:
where $d_h$ is the dimension of the hidden state vectors. Concatenating the $\mathbf {h}_{G_i}$'s we get a fixed-length vector $\mathbf {h}_s=[\mathbf {h}_{G_1},...,\mathbf {h}_{G_5}]$.
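A sketch of the grouping and pooling step described above; entity spans are given as inclusive start/end indices, and the handling of empty groups (zero vectors) is an implementation assumption not specified in the text.

```python
import torch

def summarize(H, b1, e1, b2, e2):
    """Element-wise max pooling over the 5 groups of hidden states.

    H: (seq_len, d_h) hidden state vectors of one sentence
    (b1, e1), (b2, e2): inclusive token spans of the two entities, e1 < b2
    Returns a fixed-length vector of size 5 * d_h.
    """
    d_h = H.size(1)
    groups = [H[:b1], H[b1:e1 + 1], H[e1 + 1:b2], H[b2:e2 + 1], H[e2 + 1:]]
    pooled = [g.max(dim=0).values if g.numel() else H.new_zeros(d_h) for g in groups]
    return torch.cat(pooled)              # h_s = [h_G1, ..., h_G5]

H = torch.randn(10, 8)
print(summarize(H, b1=2, e1=3, b2=6, e2=7).shape)  # torch.Size([40])
```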
<<</Summarization Layer>>>
<<<Output Layer>>>
The output layer receives inputs from the previous layers (the summarization vector $\mathbf {h}_s$, the entity label embeddings $\mathbf {l}_{m_1}$ and $\mathbf {l}_{m_2}$ for the two entities under consideration) and returns a probability distribution over the relation type labels:
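The displayed equation is not reproduced here; a typical parameterization consistent with the description (a sketch, not necessarily the paper's exact form) is $p(r \mid \mathbf {h}_s, \mathbf {l}_{m_1}, \mathbf {l}_{m_2}) = \mathrm {softmax}\big (\mathbf {W}_o [\mathbf {h}_s; \mathbf {l}_{m_1}; \mathbf {l}_{m_2}] + \mathbf {b}_o\big )$, where $[\cdot ;\cdot ]$ denotes concatenation and $\mathbf {W}_o$, $\mathbf {b}_o$ are learned parameters.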
<<</Output Layer>>>
<<<Cross-Lingual RE Model Transfer>>>
Given the word embeddings of a sequence of words in a target language $t$, $(\mathbf {y}_1,...,\mathbf {y}_n)$, we project them into the English embedding space by applying the linear mapping $\mathbf {M}_{t\rightarrow s}$ learned in Section SECREF13: $(\mathbf {M}_{t\rightarrow s}\mathbf {y}_1, \mathbf {M}_{t\rightarrow s}\mathbf {y}_2,...,\mathbf {M}_{t\rightarrow s}\mathbf {y}_n)$. The neural network English RE model is then applied on the projected word embeddings and the entity label embeddings (which are language independent) to perform relationship classification.
Note that our models do not use language-specific resources such as dependency parsers or POS taggers because these resources might not be readily available for a target language. Also our models do not use precise word position features since word positions in sentences can vary a lot across languages.
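A short sketch of this transfer step; `english_re_model` is a placeholder for the trained neural network RE model described above, and the array shapes and the number of output classes in the toy usage are assumptions for illustration.

```python
import numpy as np

def transfer_inference(english_re_model, M_t2s, Y_sentence, entity_label_embs):
    """Classify a target-language sentence with the English-trained RE model.

    M_t2s: (d, d) mapping from the target-language to the English embedding space
    Y_sentence: (n, d) target-language word embeddings of the sentence
    entity_label_embs: language-independent entity label embeddings
    """
    projected = Y_sentence @ M_t2s.T     # rows become (M y_1)^T, ..., (M y_n)^T
    return english_re_model(projected, entity_label_embs)

# Stand-in for the trained model: a dummy scorer over 53 relation types plus "O"
dummy_model = lambda emb, labels: np.ones(54) / 54
print(transfer_inference(dummy_model, np.eye(300), np.random.randn(6, 300),
                         np.random.randn(2, 50)).shape)  # (54,)
```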
<<</Cross-Lingual RE Model Transfer>>>
<<</Neural Network RE Models>>>
<<<Experiments>>>
In this section, we evaluate the performance of the proposed cross-lingual RE approach on both an in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11.
<<<Datasets>>>
Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).
The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).
For both datasets, we create a class label “O” to denote that the two entities under consideration do not have a relationship belonging to one of the relation types of interest.
<<</Datasets>>>
<<<Source (English) RE Model Performance>>>
We build 3 neural network English RE models under the architecture described in Section SECREF4:
The first neural network RE model does not have a context layer and the word embeddings are directly passed to the summarization layer. We call it Pass-Through for short.
The second neural network RE model has a Bi-LSTM context layer. We call it Bi-LSTM for short.
The third neural network model has a CNN context layer with a window size 3. We call it CNN for short.
First, we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided into 6 different domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and weblogs (wl). We apply the same data split as in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.
We learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.
In Table TABREF40 we compare our models with the best models in BIBREF30 and BIBREF6. Our Bi-LSTM model outperforms the best model (single or ensemble) in BIBREF30 and the best single model in BIBREF6, without using any language-specific resources such as dependency parsers.
While the data split in the previous works was motivated by domain adaptation, the focus of this paper is on cross-lingual model transfer, and hence we apply a random data split as follows. For the source language English and each target language, we randomly select $80\%$ of the data as the training set, $10\%$ as the development set, and keep the remaining $10\%$ as the test set. The sizes of the sets are summarized in Table TABREF41.
We report the Precision, Recall and $F_1$ score of the 3 neural network English RE models in Table TABREF42. Note that adding an additional context layer with either Bi-LSTM or CNN significantly improves the performance of our English RE model, compared with the simple Pass-Through model. Therefore, we will focus on the Bi-LSTM model and the CNN model in the subsequent experiments.
<<</Source (English) RE Model Performance>>>
<<<Cross-Lingual RE Performance>>>
We apply the English RE models to the 7 target languages across a variety of language families.
<<<Dictionary Size>>>
The bilingual dictionary includes the most frequent target-language words and their translations in English. To determine how many word pairs are needed to learn an effective bilingual word embedding mapping for cross-lingual RE, we first evaluate the performance ($F_1$ score) of our cross-lingual RE approach on the target-language development sets with an increasing dictionary size, as plotted in Figure FIGREF35.
We found that for most target languages, once the dictionary size reaches 1K, further increasing the dictionary size may not improve the transfer performance. Therefore, we select the dictionary size to be 1K.
<<</Dictionary Size>>>
<<<Comparison of Different Mappings>>>
We compare the performance of cross-lingual RE model transfer under the following bilingual word embedding mappings:
Regular-1K: the regular mapping learned in (DISPLAY_FORM14) using 1K word pairs;
Orthogonal-1K: the orthogonal mapping with length normalization learned in (DISPLAY_FORM16) using 1K word pairs (in this case we train the English RE models with the normalized English word embeddings);
Semi-Supervised-1K: the mapping learned with 1K word pairs and improved by the self-learning method in BIBREF25;
Unsupervised: the mapping learned by the unsupervised method in BIBREF26.
The results are summarized in Table TABREF46. The regular mapping outperforms the orthogonal mapping consistently across the target languages. While the orthogonal mapping was shown to work better than the regular mapping for the word translation task BIBREF22, BIBREF23, BIBREF24, our cross-lingual RE approach directly maps target-language word embeddings to the English embedding space without conducting word translations. Moreover, the orthogonal mapping requires length normalization, but we observed that length normalization adversely affects the performance of the English RE models (about 2.0 $F_1$ points drop).
We apply the vecmap toolkit to obtain the semi-supervised and unsupervised mappings. The unsupervised mapping has the lowest average accuracy over the target languages, but it does not require a seed dictionary. Among all the mappings, the regular mapping achieves the best average accuracy over the target languages using a dictionary with only 1K word pairs, and hence we adopt it for the cross-lingual RE task.
<<</Comparison of Different Mappings>>>
<<<Performance on Test Data>>>
The cross-lingual RE model transfer results for the in-house test data are summarized in Table TABREF52 and the results for the ACE05 test data are summarized in Table TABREF53, using the regular mapping learned with a bilingual dictionary of size 1K. In the tables, we also provide the performance of the supervised RE model (Bi-LSTM) for each target language, which is trained with a few hundred thousand tokens of manually annotated RE data in the target-language, and may serve as an upper bound for the cross-lingual model transfer performance.
Among the 2 neural network models, the Bi-LSTM model achieves a better cross-lingual RE performance than the CNN model for 6 out of the 7 target languages. In terms of absolute performance, the Bi-LSTM model achieves over $40.0$ $F_1$ scores for German, Spanish, Portuguese and Chinese. In terms of relative performance, it reaches over $75\%$ of the accuracy of the supervised target-language RE model for German, Spanish, Italian and Portuguese. While Japanese and Arabic appear to be more difficult to transfer, it still achieves $55\%$ and $52\%$ of the accuracy of the supervised Japanese and Arabic RE model, respectively, without using any manually annotated RE data in Japanese/Arabic.
We apply model ensemble to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initiated with different random seeds, apply the 5 models on the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models. This Ensemble approach improves the single model by 0.6-1.9 $F_1$ points, except for Arabic.
<<</Performance on Test Data>>>
<<<Discussion>>>
Since our approach projects the target-language word embeddings to the source-language embedding space preserving the word order, it is expected to work better for a target language that has more similar word order as the source language. This has been verified by our experiments. The source language, English, belongs to the SVO (Subject, Verb, Object) language family where in a sentence the subject comes first, the verb second, and the object third. Spanish, Italian, Portuguese, German (in conventional typology) and Chinese also belong to the SVO language family, and our approach achieves over $70\%$ relative accuracy for these languages. On the other hand, Japanese belongs to the SOV (Subject, Object, Verb) language family and Arabic belongs to the VSO (Verb, Subject, Object) language family, and our approach achieves lower relative accuracy for these two languages.
<<</Discussion>>>
<<</Cross-Lingual RE Performance>>>
<<</Experiments>>>
<<<Related Work>>>
There are a few weakly supervised cross-lingual RE approaches. BIBREF7 and BIBREF8 project annotated English RE data to Korean to create weakly labeled training data via aligned parallel corpora. BIBREF9 translates a target-language sentence into English, performs RE in English, and then projects the relation phrases back to the target-language sentence. BIBREF10 proposes an adversarial feature adaptation approach for cross-lingual relation classification, which uses a machine translation system to translate source-language sentences into target-language sentences. Unlike the existing approaches, our approach does not require aligned parallel corpora or machine translation systems. There are also several multilingual RE approaches, e.g., BIBREF34, BIBREF35, BIBREF36, where the focus is to improve monolingual RE by jointly modeling texts in multiple languages.
Many cross-lingual word embedding models have been developed recently BIBREF15, BIBREF16. An important application of cross-lingual word embeddings is to enable cross-lingual model transfer. In this paper, we apply the bilingual word embedding mapping technique in BIBREF17 to cross-lingual RE model transfer. Similar approaches have been applied to other NLP tasks such as dependency parsing BIBREF37, POS tagging BIBREF38 and named entity recognition BIBREF21, BIBREF39.
<<</Related Work>>>
<<<Conclusion>>>
In this paper, we developed a simple yet effective neural cross-lingual RE model transfer approach, which has very low resource requirements (a small bilingual dictionary with 1K word pairs) and can be easily extended to a new language. Extensive experiments for 7 target languages across a variety of language families on both in-house and open datasets show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model), which provides a strong baseline for building cross-lingual RE models with minimal resources.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nOverview of the Approach\nCross-Lingual Word Embeddings\nMonolingual Word Embeddings\nBilingual Word Embedding Mapping\nLength Normalization and Orthogonal Transformation\nSemi-Supervised and Unsupervised Mappings\nNeural Network RE Models\nEmbedding Layer\nContext Layer\nBi-LSTM Context Layer\nCNN Context Layer\nSummarization Layer\nOutput Layer\nCross-Lingual RE Model Transfer\nExperiments\nDatasets\nSource (English) RE Model Performance\nCross-Lingual RE Performance\nDictionary Size\nComparison of Different Mappings\nPerformance on Test Data\nDiscussion\nRelated Work\nConclusion"
],
"type": "outline"
}
|
2001.01589
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Morphological Word Segmentation on Agglutinative Languages for Neural Machine Translation
<<<Abstract>>>
Neural machine translation (NMT) has achieved impressive performance on the machine translation task in recent years. However, in consideration of efficiency, a limited-size vocabulary that only contains the top-N highest frequency words is employed for model training, which leads to many rare and unknown words. This is particularly problematic when translating from low-resource and morphologically-rich agglutinative languages, which have complex morphology and large vocabularies. In this paper, we propose a morphological word segmentation method on the source side for NMT that incorporates morphology knowledge to preserve the linguistic and semantic information in the word structure while reducing the vocabulary size at training time. It can also be utilized as a preprocessing tool to segment the words in agglutinative languages for other natural language processing (NLP) tasks. Experimental results show that our morphologically motivated word segmentation method is better suited to the NMT model, achieving significant improvements on Turkish-English and Uyghur-Chinese machine translation tasks on account of reducing data sparseness and language complexity.
<<</Abstract>>>
<<<Introduction>>>
Neural machine translation (NMT) has achieved impressive performance on the machine translation task in recent years for many language pairs BIBREF0, BIBREF1, BIBREF2. However, in consideration of time cost and space capacity, the NMT model generally employs a limited-size vocabulary that only contains the top-N highest frequency words (commonly in the range of 30K to 80K) BIBREF3, which leads to the Out-of-Vocabulary (OOV) problem, followed by inaccurate and poor translation results. Research has indicated that sentences with too many unknown words tend to be translated much more poorly than sentences with mainly frequent words. For low-resource and source-side morphologically-rich machine translation tasks, such as Turkish-English and Uyghur-Chinese, all of the above issues are more serious because the NMT model cannot effectively identify the complex morpheme structure or capture the linguistic and semantic information when there are too many rare and unknown words in the training corpus.
Both Turkish and Uyghur are agglutinative and highly-inflected languages in which the word is formed by suffixes attaching to a stem BIBREF4. The word consists of smaller morpheme units without any splitter between them, and its structure can be denoted as “stem + suffix1 + suffix2 + ... + suffixN”. A stem is followed by zero or more suffixes that have many inflected and morphological variants depending on case, number, gender, and so on. The complex morpheme structure and relatively free constituent order can produce a very large vocabulary because of the derivational morphology, so when translating from agglutinative languages, many words are unseen at training time. Moreover, due to the semantic context, the same word generally has different segmentation forms in the training corpus.
For the purpose of incorporating morphology knowledge of agglutinative languages into word segmentation for NMT, we propose a morphological word segmentation method on the source-side of Turkish-English and Uyghur-Chinese machine translation tasks, which segments the complex words into simple and effective morpheme units while reducing the vocabulary size for model training. In this paper, we investigate and compare the following segmentation strategies:
Stem with combined suffix
Stem with singular suffix
Byte Pair Encoding (BPE)
BPE on stem with combined suffix
BPE on stem with singular suffix
The latter two segmentation strategies are our newly proposed methods. Experimental results show that our morphologically motivated word segmentation method can achieve significant improvements of up to 1.2 and 2.5 BLEU points on the Turkish-English and Uyghur-Chinese machine translation tasks over the strong baseline of the pure BPE method, respectively, indicating that it can provide better translation performance for the NMT model.
<<</Introduction>>>
<<<Approach>>>
We elaborate on two popular word segmentation methods and our newly proposed segmentation strategies in this section. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add a specific symbol behind each separated subword unit, which aims to assist the NMT model in identifying the morpheme boundaries and capturing the semantic information effectively. The sentence examples with different segmentation strategies for the Turkish-English machine translation task are shown in Table 1.
<<<Morpheme Segmentation>>>
The words of Turkish and Uyghur are formed by a stem followed by an unlimited number of suffixes. Both the stem and the suffixes are called morphemes, and they are the smallest functional units in agglutinative languages. Studies indicate that modeling language based on morpheme units can provide better performance BIBREF6. Morpheme segmentation can segment a complex word into morpheme units of stem and suffix. This representation maintains a full description of the morphological properties of subwords while minimizing the data sparseness caused by inflection and allomorphy phenomena in highly-inflected languages.
<<<Stem with Combined Suffix>>>
In this segmentation strategy, each word is segmented into a stem unit and a combined suffix unit. We add “##” behind the stem unit and add “$$” behind the combined suffix unit. We denote this method as SCS. The segmented word can be denoted as two parts of “stem##” and “suffix1suffix2...suffixN$$”. If the original word has no suffix unit, the word is treated as its stem unit. All the following segmentation strategies will follow this rule.
<<</Stem with Combined Suffix>>>
<<<Stem with Singular Suffix>>>
In this segmentation strategy, each word is segmented into a stem unit and a sequence of suffix units. We add “##” behind the stem unit and add “$$” behind each singular suffix unit. We denote this method as SSS. The segmented word can be denoted as a sequence of “stem##”, “suffix1$$”, “suffix2$$” until “suffixN$$”.
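For illustration (see also Table 1), a minimal sketch of the SCS and SSS marking schemes, assuming a morphological analyzer has already produced the (stem, suffixes) split; the handling of suffix-less words follows one reading of the description above, and the function names are ours.

```python
def mark_scs(stem, suffixes):
    """Stem with Combined Suffix: 'stem##' + 'suffix1...suffixN$$'."""
    if not suffixes:
        return [stem]                 # word with no suffix is treated as its stem unit
    return [stem + "##", "".join(suffixes) + "$$"]

def mark_sss(stem, suffixes):
    """Stem with Singular Suffix: 'stem##' followed by each 'suffix$$'."""
    if not suffixes:
        return [stem]
    return [stem + "##"] + [s + "$$" for s in suffixes]

# Toy Turkish example: 'evlerde' = stem 'ev' + suffixes 'ler', 'de'
print(mark_scs("ev", ["ler", "de"]))  # ['ev##', 'lerde$$']
print(mark_sss("ev", ["ler", "de"]))  # ['ev##', 'ler$$', 'de$$']
```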
<<</Stem with Singular Suffix>>>
<<</Morpheme Segmentation>>>
<<<Byte Pair Encoding (BPE)>>>
BPE BIBREF7 is originally a data compression technique, and it was adapted by BIBREF5 for word segmentation and vocabulary reduction by encoding rare and unknown words as sequences of subword units, in which the most frequent character sequences are merged iteratively. Frequent character n-grams are eventually merged into a single symbol. This is based on the intuition that various word classes are translatable via smaller units than words. This method makes the NMT model capable of open-vocabulary translation, as it can generalize to translate and produce new words on the basis of these subword units. The BPE algorithm can be run on the dictionary extracted from a training text, with each word being weighted by its frequency. In this segmentation strategy, we add “@@” behind each non-final subword unit of the segmented word.
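For reference, a compact sketch of the generic BPE merge-learning loop described above, operating on a frequency dictionary of space-separated symbol sequences; this is the textbook algorithm, not the exact toolkit configuration used in the experiments.

```python
import re
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """word_freqs: dict mapping 'c h a r s' (space-separated symbols) -> frequency."""
    vocab, merges = dict(word_freqs), []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)           # most frequent adjacent pair
        merges.append(best)
        pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(best)) + r"(?!\S)")
        vocab = {pattern.sub("".join(best), w): f for w, f in vocab.items()}
    return merges

freqs = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
print(learn_bpe(freqs, 3))  # e.g. [('e', 's'), ('es', 't'), ...]
```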
<<</Byte Pair Encoding (BPE)>>>
<<<Morphologically Motivated Segmentation>>>
The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at training time. The problem with BPE is that it does not consider the morpheme boundaries inside words, which might cause a loss of morphological properties and semantic information. Hence, based on the analyses of the above popular word segmentation methods, we propose a morphologically motivated segmentation strategy that combines morpheme segmentation and BPE for further improving the translation performance of NMT.
Compared with the sentence of word surface forms, the corresponding sentence of stem units only contains the structure information without considering morphological information, which can make better generalization over inflectional variants of the same word and reduce data sparseness BIBREF8. Therefore, we learn a BPE model on the stem units in the training corpus rather than the words, and then apply it on the stem unit of each word after morpheme segmentation.
<<<BPE on Stem with Combined Suffix>>>
In this segmentation strategy, we first segment each word into a stem unit and a combined suffix unit as in SCS. Second, we apply BPE on the stem unit. Third, we add “$$” behind the combined suffix unit. If the stem unit is not segmented, we add “##” behind it. Otherwise, we add “@@” behind each non-final subword of the segmented stem unit. We denote this method as BPE-SCS.
<<</BPE on Stem with Combined Suffix>>>
<<<BPE on Stem with Singular Suffix>>>
In this segmentation strategy, we first segment each word into a stem unit and a sequence of suffix units as in SSS. Second, we apply BPE on the stem unit. Third, we add “$$” behind each singular suffix unit. If the stem unit is not segmented, we add “##” behind it. Otherwise, we add “@@” behind each non-final subword of the segmented stem unit. We denote this method as BPE-SSS.
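A sketch of the BPE-SSS marking, assuming `apply_bpe` returns the subword sequence of a stem under a BPE model learned on stem units only; marker handling for edge cases follows one reading of the description above.

```python
def bpe_sss(stem, suffixes, apply_bpe):
    """BPE on Stem with Singular Suffix (BPE-SSS)."""
    subwords = apply_bpe(stem)                        # BPE model trained on stem units
    if len(subwords) == 1:
        stem_units = [subwords[0] + "##"]             # unsegmented stem keeps '##'
    else:
        stem_units = [s + "@@" for s in subwords[:-1]] + [subwords[-1]]
    return stem_units + [s + "$$" for s in suffixes]

# Toy Turkish example: 'okullarda' = stem 'okul' + suffixes 'lar', 'da'
print(bpe_sss("okul", ["lar", "da"], apply_bpe=lambda s: [s[:2], s[2:]]))
# ['ok@@', 'ul', 'lar$$', 'da$$']
```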
<<</BPE on Stem with Singular Suffix>>>
<<</Morphologically Motivated Segmentation>>>
<<</Approach>>>
<<<Experiments>>>
<<<Experimental Setup>>>
<<<Turkish-English Data :>>>
Following BIBREF9, we use the WIT corpus BIBREF10 and SETimes corpus BIBREF11 for model training, and use the newsdev2016 from Workshop on Machine Translation in 2016 (WMT2016) for validation. The test data are newstest2016 and newstest2017.
<<</Turkish-English Data :>>>
<<<Uyghur-Chinese Data :>>>
We use the news data from China Workshop on Machine Translation in 2017 (CWMT2017) for model training, validation and test.
<<</Uyghur-Chinese Data :>>>
<<<Data Preprocessing :>>>
We utilize Zemberek with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. We employ the Python toolkit jieba for Chinese word segmentation. We apply BPE on the target-side words, setting the number of merge operations to 35K for Chinese and 30K for English, and we set the maximum sentence length to 150 tokens. The training corpus statistics of the Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 2 and Table 3 respectively.
<<</Data Preprocessing :>>>
<<<Number of Merge Operations :>>>
We set the number of merge operations on the stem units so as to keep the vocabulary sizes of the BPE, BPE-SCS and BPE-SSS segmentation strategies on the same scale. We elaborate on the number settings for our proposed word segmentation strategies in this section.
In the Turkish-English machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 35K, set the number of merge operations on the stem units for BPE-SCS strategy to 15K, and set the number of merge operations on the stem units for BPE-SSS strategy to 25K. In the Uyghur-Chinese machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 38K, set the number of merge operations on the stem units for BPE-SCS strategy to 10K, and set the number of merge operations on the stem units for BPE-SSS strategy to 35K. The detailed training corpus statistics with different segmentation strategies of Turkish and Uyghur are shown in Table 4 and Table 5 respectively.
According to Table 4 and Table 5, we can see that both Turkish and Uyghur have a very large vocabulary even in the low-resource training corpora. Therefore, we propose the morphological word segmentation strategies BPE-SCS and BPE-SSS, which additionally apply BPE on the stem units after morpheme segmentation, and thus not only consider the morphological properties but also eliminate rare and unknown words.
<<</Number of Merge Operations :>>>
<<</Experimental Setup>>>
<<<NMT Configuration>>>
We employ the Transformer model BIBREF13 with self-attention mechanism architecture implemented in Sockeye toolkit BIBREF14. Both the encoder and decoder have 6 layers. We set the number of hidden units to 512, the number of heads for self-attention to 8, the source and target word embedding size to 512, and the number of hidden units in feed-forward layers to 2048. We train the NMT model by using the Adam optimizer BIBREF15 with a batch size of 128 sentences, and we shuffle all the training data at each epoch. The label smoothing is set to 0.1. We report the result of averaging the parameters of the 4 best checkpoints on the validation perplexity. Decoding is performed by beam search with beam size of 5. To effectively evaluate the machine translation quality, we report case-sensitive BLEU score with standard tokenization and character n-gram ChrF3 score .
<<</NMT Configuration>>>
<<</Experiments>>>
<<<Results>>>
In this paper, we investigate and compare morpheme segmentation, BPE and our proposed morphological segmentation strategies on the low resource and morphologically-rich agglutinative languages. Experimental results of Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 6 and Table 7 respectively.
<<</Results>>>
<<<Discussion>>>
According to Table 6 and Table 7, we find that both the BPE-SCS and BPE-SSS strategies outperform morpheme segmentation and the strong baseline of the pure BPE method. In particular, the BPE-SSS strategy is better, achieving a significant improvement of up to 1.2 BLEU points on the Turkish-English machine translation task and 2.5 BLEU points on the Uyghur-Chinese machine translation task. Furthermore, we also find that the improvement of our proposed segmentation strategy on the Turkish-English task is less pronounced than on the Uyghur-Chinese task. The probable reasons are as follows: the training corpus of Turkish-English consists of talk and news data, and most of the talk data are short informal sentences compared with the news data, which cannot provide as much language information for the NMT model. Moreover, the test corpus consists of news data, so because the data domains differ, the improvement in machine translation quality is limited.
In addition, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects the machine translation quality. Experimental results are shown in Table 8 and Table 9. We find that 25K for Turkish, and 30K and 35K for Uyghur, maximize the translation performance. The probable reason is that these numbers of merge operations generate a more appropriate vocabulary containing effective morpheme units and moderate subword units, which generalizes better over the morphologically-rich words.
<<</Discussion>>>
<<<Related Work>>>
The NMT system is typically trained with a limited vocabulary, which creates a bottleneck for translation accuracy and generalization capability. Many word segmentation methods have been proposed to cope with the above problems, which consider the morphological properties of different languages.
Bradbury and Socher BIBREF16 employed a modified Morfessor to provide morphology knowledge for word segmentation, but they neglected the morphological varieties between subword units, which might result in ambiguous translation results. Sanchez-Cartagena and Toral BIBREF17 proposed a rule-based morphological word segmentation for Finnish, which applies BPE on all the morpheme units uniformly without distinguishing their inner morphological roles. Huck BIBREF18 explored a target-side segmentation method for German, which shows that the cascading of suffix splitting and compound splitting with BPE can achieve better translation results. Ataman et al. BIBREF19 presented a linguistically motivated vocabulary reduction approach for Turkish, which optimizes the segmentation complexity with a constraint on the vocabulary based on a category-based hidden Markov model (HMM). Our work is closely related to their idea, while ours is simpler and easier to implement. Tawfik et al. BIBREF20 confirmed that there is some advantage in using a high-accuracy dialectal segmenter jointly with a language-independent word segmentation method like BPE. The main difference is that their approach additionally needs sufficient monolingual data to train a segmentation model, while ours does not need any external resources, which is very convenient for word segmentation on low-resource and morphologically-rich agglutinative languages.
<<</Related Work>>>
<<<Conclusion>>>
In this paper, we investigate morphological segmentation strategies on the low-resource and morphologically-rich languages of Turkish and Uyghur. Experimental results show that our proposed morphologically motivated word segmentation method is better suited to NMT. The BPE-SSS strategy achieves the best machine translation performance, as it can better preserve the syntactic and semantic information of words with complex morphology as well as reduce the vocabulary size for model training. Moreover, we also estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects the translation quality, and we find that an appropriate vocabulary size is more useful for the NMT model.
In future work, we are planning to incorporate more linguistic and morphology knowledge into the training process of NMT to enhance its capacity of capturing syntactic structure and semantic information on the low-resource and morphologically-rich languages.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nApproach\nMorpheme Segmentation\nStem with Combined Suffix\nStem with Singular Suffix\nByte Pair Encoding (BPE)\nMorphologically Motivated Segmentation\nBPE on Stem with Combined Suffix\nBPE on Stem with Singular Suffix\nExperiments\nExperimental Setup\nTurkish-English Data :\nUyghur-Chinese Data :\nData Preprocessing :\nNumber of Merge Operations :\nNMT Configuration\nResults\nDiscussion\nRelated Work\nConclusion"
],
"type": "outline"
}
|
1910.05456
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge
<<<Abstract>>>
How does knowledge of one language's morphology influence learning of inflection rules in a second one? In order to investigate this question in artificial neural network models, we perform experiments with a sequence-to-sequence architecture, which we train on different combinations of eight source and three target languages. A detailed analysis of the model outputs suggests the following conclusions: (i) if source and target language are closely related, acquisition of the target language's inflectional morphology constitutes an easier task for the model; (ii) knowledge of a prefixing (resp. suffixing) language makes acquisition of a suffixing (resp. prefixing) language's morphology more challenging; and (iii) surprisingly, a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology, independent of their relatedness.
<<</Abstract>>>
<<<Introduction>>>
A widely agreed-on fact in language acquisition research is that learning of a second language (L2) is influenced by a learner's native language (L1) BIBREF0, BIBREF1. A language's morphosyntax seems to be no exception to this rule BIBREF2, but the exact nature of this influence remains unknown. For instance, it is unclear whether it is constraints imposed by the phonological or by the morphosyntactic attributes of the L1 that are more important during the process of learning an L2's morphosyntax.
Within the area of natural language processing (NLP) research, experimenting on neural network models just as if they were human subjects has recently been gaining popularity BIBREF3, BIBREF4, BIBREF5. Often, so-called probing tasks are used, which require a specific subset of linguistic knowledge and can, thus, be leveraged for qualitative evaluation. The goal is to answer the question: What do neural networks learn that helps them to succeed in a given task?
Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the "native language", in neural network models.
To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology.
<<</Introduction>>>
<<<Task>>>
Many of the world's languages exhibit rich inflectional morphology: the surface form of an individual lexical entry changes in order to express properties such as person, grammatical gender, or case. The citation form of a lexical entry is referred to as the lemma. The set of all possible surface forms or inflections of a lemma is called its paradigm. Each inflection within a paradigm can be associated with a tag, i.e., 3rdSgPres is the morphological tag associated with the inflection dances of the English lemma dance. We display the paradigms of dance and eat in Table TABREF1.
The presence of rich inflectional morphology is problematic for NLP systems as it increases word form sparsity. For instance, while English verbs can have up to 5 inflected forms, Archi verbs have thousands BIBREF7, even by a conservative count. Thus, an important task in the area of morphology is morphological inflection BIBREF8, BIBREF9, which consists of mapping a lemma to an indicated inflected form. An (irregular) English example would be
with PAST being the target tag, denoting the past tense form. Additionally, a rich inflectional morphology is also challenging for L2 language learners, since both rules and their exceptions need to be memorized.
In NLP, morphological inflection has recently frequently been cast as a sequence-to-sequence problem, where the sequence of target (sub-)tags together with the sequence of input characters constitute the input sequence, and the characters of the inflected word form the output. Neural models define the state of the art for the task and obtain high accuracy if an abundance of training data is available. Here, we focus on learning of inflection from limited data if information about another language's morphology is already known. We, thus, loosely simulate an L2 learning setting.
<<<Formal definition.>>>
Let ${\cal M}$ be the paradigm slots which are being expressed in a language, and $w$ a lemma in that language. We then define the paradigm $\pi $ of $w$ as:
$f_k[w]$ denotes an inflected form corresponding to tag $t_{k}$, and $w$ and $f_k[w]$ are strings consisting of letters from an alphabet $\Sigma $.
The task of morphological inflection consists of predicting a missing form $f_i[w]$ from a paradigm, given the lemma $w$ together with the tag $t_i$.
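The displayed definition is not reproduced here; a formulation consistent with the surrounding text (a sketch whose notation may differ from the original) is $\pi (w) = \lbrace \langle f_{k}[w], t_{k}\rangle \mid t_{k} \in {\cal M}\rbrace $, and the task then maps $\langle w, t_i\rangle $ to $f_i[w]$, e.g., $\langle \textit {eat}, \textrm {PAST}\rangle \mapsto \textit {ate}$ for the irregular English example mentioned above.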
<<</Formal definition.>>>
<<</Task>>>
<<<Model>>>
<<<Pointer–Generator Network>>>
The models we experiment with are based on a pointer–generator network architecture BIBREF10, BIBREF11, i.e., a recurrent neural network (RNN)-based sequence-to-sequence network with attention and a copy mechanism. A standard sequence-to-sequence model BIBREF12 has been shown to perform well for morphological inflection BIBREF13 and has, thus, been subject to cognitively motivated experiments BIBREF14 before. Here, however, we choose the pointer–generator variant of sharma-katrapati-sharma:2018:K18-30, since it performs better in low-resource settings, which we will assume for our target languages. We will explain the model shortly in the following and refer the reader to the original paper for more details.
<<<Encoders.>>>
Our architecture employs two separate encoders, which are both bi-directional long short-term memory (LSTM) networks BIBREF15: The first processes the morphological tags which describe the desired target form one by one. The second encodes the sequence of characters of the input word.
<<</Encoders.>>>
<<<Attention.>>>
Two separate attention mechanisms are used: one per encoder LSTM. Taking all respective encoder hidden states as well as the current decoder hidden state as input, each of them outputs a so-called context vector, which is a weighted sum of all encoder hidden states. The concatenation of the two individual context vectors results in the final context vector $c_t$, which is the input to the decoder at time step $t$.
<<</Attention.>>>
<<<Decoder.>>>
Our decoder consists of a uni-directional LSTM. Unlike a standard sequence-to-sequence model, a pointer–generator network is not limited to generating characters from the vocabulary to produce the output. Instead, the model gives certain probability to copying elements from the input over to the output. The probability of a character $y_t$ at time step $t$ is computed as a sum of the probability of $y_t$ given by the decoder and the probability of copying $y_t$, weighted by the probabilities of generating and copying:
$p_{\textrm {dec}}(y_t)$ is calculated as an LSTM update and a projection of the decoder state to the vocabulary, followed by a softmax function. $p_{\textrm {copy}}(y_t)$ corresponds to the attention weights for each input character. The model computes the probability $\alpha $ with which it generates a new output character as
for context vector $c_t$, decoder state $s_t$, embedding of the last output $y_{t-1}$, weights $w_c$, $w_s$, $w_y$, and bias vector $b$. It has been shown empirically that the copy mechanism of the pointer–generator network architecture is beneficial for morphological generation in the low-resource setting BIBREF16.
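The two displayed equations are not reproduced here; a standard pointer–generator formulation consistent with the description (a sketch that may differ from the paper's exact parameterization) is $p(y_t) = \alpha \, p_{\textrm {dec}}(y_t) + (1-\alpha )\, p_{\textrm {copy}}(y_t)$, with $\alpha = \sigma (w_c^{\mathrm {T}} c_t + w_s^{\mathrm {T}} s_t + w_y^{\mathrm {T}} y_{t-1} + b)$.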
<<</Decoder.>>>
<<</Pointer–Generator Network>>>
<<<Pretraining and Finetuning>>>
Pretraining and successive fine-tuning of neural network models is a common approach for handling of low-resource settings in NLP. The idea is that certain properties of language can be learned either from raw text, related tasks, or related languages. Technically, pretraining consists of estimating some or all model parameters on examples which do not necessarily belong to the final target task. Fine-tuning refers to continuing training of such a model on a target task, whose data is often limited. While the sizes of the pretrained model parameters usually remain the same between the two phases, the learning rate or other details of the training regime, e.g., dropout, might differ. Pretraining can be seen as finding a suitable initialization of model parameters, before training on limited amounts of task- or language-specific examples.
In the context of morphological generation, pretraining in combination with fine-tuning has been used by kann-schutze-2018-neural, which proposes to pretrain a model on general inflection data and fine-tune on examples from a specific paradigm whose remaining forms should be automatically generated. Famous examples for pretraining in the wider area of NLP include BERT BIBREF17 or GPT-2 BIBREF18: there, general properties of language are learned using large unlabeled corpora.
Here, we are interested in pretraining as a simulation of familiarity with a native language. By investigating a fine-tuned model we ask the question: How does extensive knowledge of one language influence the acquisition of another?
<<</Pretraining and Finetuning>>>
<<</Model>>>
<<<Experimental Design>>>
<<<Target Languages>>>
We choose three target languages.
English (ENG) is a morphologically impoverished language, as far as inflectional morphology is concerned. Its verbal paradigm only consists of up to 5 different forms and its nominal paradigm of only up to 2. However, it is one of the most frequently spoken and taught languages in the world, making its acquisition a crucial research topic.
Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\rightarrow $ ue).
Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.
<<</Target Languages>>>
<<<Source Languages>>>
For pretraining, we choose languages with different degrees of relatedness and varying morphological similarity to English, Spanish, and Zulu. We limit our experiments to languages which are written in Latin script.
As an estimate for morphological similarity we look at the features from the Morphology category mentioned in The World Atlas of Language Structures (WALS). An overview of the available features as well as the respective values for our set of languages is shown in Table TABREF13.
We decide on Basque (EUS), French (FRA), German (DEU), Hungarian (HUN), Italian (ITA), Navajo (NAV), Turkish (TUR), and Quechua (QVH) as source languages.
Basque is a language isolate. Its inflectional morphology makes similarly frequent use of prefixes and suffixes, with suffixes mostly being attached to nouns, while prefixes and suffixes can both be employed for verbal inflection.
French and Italian are Romance languages, and thus belong to the same family as the target language Spanish. Both are suffixing and fusional languages.
German, like English, belongs to the Germanic language family. It is a fusional, predominantly suffixing language and, similarly to Spanish, makes use of stem changes.
Hungarian, a Finno-Ugric language, and Turkish, a Turkic language, both exhibit an agglutinative morphology, and are predominantly suffixing. They further have vowel harmony systems.
Navajo is an Athabaskan language and the only source language which is strongly prefixing. It further exhibits consonant harmony among its sibilants BIBREF19, BIBREF20.
Finally, Quechua, a Quechuan language spoken in South America, is again predominantly suffixing and unrelated to all of our target languages.
<<</Source Languages>>>
<<<Hyperparameters and Data>>>
We mostly use the default hyperparameters by sharma-katrapati-sharma:2018:K18-30. In particular, all RNNs have one hidden layer of size 100, and all input and output embeddings are 300-dimensional.
For optimization, we use ADAM BIBREF21. Pretraining on the source language is done for exactly 50 epochs. To obtain our final models, we then fine-tune different copies of each pretrained model for 300 additional epochs for each target language. We employ dropout BIBREF22 with a coefficient of 0.3 for pretraining and, since that dataset is smaller, with a coefficient of 0.5 for fine-tuning.
We make use of the datasets from the CoNLL–SIGMORPHON 2018 shared task BIBREF9. The organizers provided a low, medium, and high setting for each language, with 100, 1000, and 10000 examples, respectively. For all L1 languages, we train our models on the high-resource datasets with 10000 examples. For fine-tuning, we use the low-resource datasets.
<<</Hyperparameters and Data>>>
<<</Experimental Design>>>
<<<Quantitative Results>>>
In Table TABREF18, we show the final test accuracy for all models and languages. Pretraining on EUS and NAV results in the weakest target language inflection models for ENG, which might be explained by those two languages being unrelated to ENG and making at least partial use of prefixing, while ENG is a suffixing language (cf. Table TABREF13). In contrast, HUN and ITA yield the best final models for ENG. This is surprising, since DEU is the language in our experiments which is closest related to ENG.
For SPA, again HUN performs best, followed closely by ITA. While the good performance of HUN as a source language is still unexpected, ITA is closely related to SPA, which could explain the high accuracy of the final model. As for ENG, pretraining on EUS and NAV yields the worst final models – importantly, accuracy is over $15\%$ lower than for QVH, which is also an unrelated language. This again suggests that the prefixing morphology of EUS and NAV might play a role.
Lastly, for ZUL, all models perform rather poorly, with a minimum accuracy of 10.7 and 10.8 for the source languages QVH and EUS, respectively, and a maximum accuracy of 24.9 for a model pretrained on Turkish. The latter result hints at the fact that a regular and agglutinative morphology might be beneficial in a source language – something which could also account for the performance of models pretrained on HUN.
<<</Quantitative Results>>>
<<<Qualitative Results>>>
For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison. As we can see, the results are similar to the test set results for all language combinations. We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories.
<<<Stem Errors>>>
SUB(X): This error consists of a wrong substitution of one character with another. SUB(V) and SUB(C) denote this happening with a vowel or a consonant, respectively. Letters that differ from each other by an accent count as different vowels.
Example: decultared instead of decultured
DEL(X): This happens when the system omits a letter from the output. DEL(V) and DEL(C) refer to a missing vowel or consonant, respectively.
Example: firte instead of firtle
NO_CHG(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (NO_CHG(V)) or a consonant (NO_CHG(C)), but this is missing in the predicted form.
Example: verto instead of vierto
MULT: This describes cases where two or more errors occur in the stem. Errors concerning the affix are counted for separately.
Example: aconcoonaste instead of acondicionaste
ADD(X): This error occurs when a letter is mistakenly added to the inflected form. ADD(V) refers to an unnecessary vowel, ADD(C) refers to an unnecessary consonant.
Example: compillan instead of compilan
CHG2E(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (CHG2E(V)) or a consonant (CHG2E(C)), and this is done, but the resulting vowel or consonant is incorrect.
Example: propace instead of propague
<<</Stem Errors>>>
<<<Affix Errors>>>
AFF: This error refers to a wrong affix. This can be either a prefix or a suffix, depending on the correct target form.
Example: ezoJulayi instead of esikaJulayi
CUT: This consists of cutting too much of the lemma's prefix or suffix before attaching the inflected form's prefix or suffix, respectively.
Example: irradiseis instead of irradiaseis
<<</Affix Errors>>>
<<<Miscellaneous Errors>>>
REFL: This happens when a reflective pronoun is missing in the generated form.
Example: doliéramos instead of nos doliéramos
REFL_LOC: This error occurs if the reflective pronouns appears at an unexpected position within the generated form.
Example: taparsebais instead of os tapabais
OVERREG: Overregularization errors occur when the model predicts a form which would be correct if the lemma's inflections were regular but they are not.
Example: underteach instead of undertaught
<<</Miscellaneous Errors>>>
<<<Error Analysis: English>>>
Table TABREF35 displays the errors found in the 75 first ENG development examples, for each source language. From Table TABREF19, we know that HUN $>$ ITA $>$ TUR $>$ DEU $>$ FRA $>$ QVH $>$ NAV $>$ EUS, and we get a similar picture when analyzing the first examples. Thus, especially keeping HUN and TUR in mind, we cautiously propose a first conclusion: familiarity with languages which exhibit an agglutinative morphology simplifies learning of a new language's morphology.
Looking at the types of errors, we find that EUS and NAV make the most stem errors. For QVH we find fewer, but still over 10 more than for the remaining languages. This suggests that models pretrained on prefixing or partly prefixing languages indeed have a harder time learning ENG inflectional morphology and, in particular, copying the stem correctly. Thus, our second hypothesis is that familiarity with a prefixing language might lead the model to expect changes in the part of the stem which should remain unaltered in a suffixing language. DEL(X) and ADD(X) errors are particularly frequent for EUS and NAV, which further suggests this conclusion.
Next, the relatively large number of stem errors for QVH leads to our third hypothesis: language relatedness does play a role when trying to produce a correct stem of an inflected form. This is also implied by the number of MULT errors for EUS, NAV and QVH, as compared to the other languages.
Considering errors related to the affixes which have to be generated, we find that DEU, HUN and ITA make the fewest. This further suggests the conclusion that, especially since DEU is the language most closely related to ENG, language relatedness plays a role in producing suffixes of inflected forms as well.
Our last observation is that many errors are not found at all in our data sample, e.g., CHG2E(X) or NO_CHG(C). This can be explained by ENG having a relatively poor inflectional morphology, which does not leave much room for mistakes.
<<</Error Analysis: English>>>
<<<Error Analysis: Spanish>>>
The errors committed for SPA are shown in Table TABREF37, again listed by source language. Together with Table TABREF19, it becomes clear that SPA inflectional morphology is more complex than that of ENG: systems for all source languages perform worse.
Similarly to ENG, however, we find that most stem errors happen for the source languages EUS and NAV, which is further evidence for our previous hypothesis that familiarity with prefixing languages impedes acquisition of a suffixing one. Especially MULT errors are much more frequent for EUS and NAV than for all other languages. ADD(X) errors are frequent for EUS, and ADD(C) is also frequent for NAV. Models pretrained on either language have difficulties with vowel changes, which is reflected in NO_CHG(V). Thus, we conclude that this phenomenon is generally hard to learn.
Analyzing next the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be beneficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.
<<</Error Analysis: Spanish>>>
<<<Error Analysis: Zulu>>>
In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language.
Besides that, results differ from those for ENG and SPA. First of all, more mistakes are made for all source languages. However, there are also several finer differences. For ZUL, the model pretrained on QVH makes the most stem errors, in particular 4 more than the EUS model, which comes second. Given that ZUL is a prefixing language and QVH is suffixing, this relative order seems important. QVH also commits the highest number of MULT errors.
The next big difference between the results for ZUL and those for ENG and SPA is that DEL(X) and ADD(X) errors, which previously have mostly been found for the prefixing or partially prefixing languages EUS and NAV, are now most present in the outputs of suffixing languages. Namely, DEL(C) occurs most for FRA and ITA, DEL(V) for FRA and QVH, and ADD(C) and ADD(V) for HUN. While some deletion and insertion errors are subsumed in MULT, this does not fully explain this difference. For instance, QVH has both the second most DEL(V) and the most MULT errors.
The overall number of errors related to the affix seems comparable between models with different source languages. This weakly supports the hypothesis that relatedness reduces affix-related errors, since none of the pretraining languages in our experiments is particularly close to ZUL. However, we do find more CUT errors for HUN and TUR: again, these are suffixing, while CUT for the target language SPA mostly happened for the prefixing languages EUS and NAV.
<<</Error Analysis: Zulu>>>
<<<Limitations>>>
A limitation of our work is that we only include languages that are written in Latin script. An interesting question for future work might, thus, regard the effect of disjoint L1 and L2 alphabets.
Furthermore, none of the languages included in our study exhibits a templatic morphology. We make this choice because data for templatic languages is currently mostly available in non-Latin alphabets. Future work could investigate languages with templatic morphology as source or target languages, if needed by mapping the language's alphabet to Latin characters.
Finally, while we intend to choose a diverse set of languages for this study, our overall number of languages is still rather small. This affects the generalizability of the results, and future work might want to look at larger samples of languages.
<<</Limitations>>>
<<</Qualitative Results>>>
<<<Related Work>>>
<<<Neural network models for inflection.>>>
Most research on inflectional morphology in NLP within the last years has been related to the SIGMORPHON and CoNLL–SIGMORPHON shared tasks on morphological inflection, which have been organized yearly since 2016 BIBREF6. While these shared tasks have traditionally focused on individual languages, the 2019 edition BIBREF23 contained a task which asked for transfer learning from a high-resource to a low-resource language. However, source–target pairs were predefined, and the question of how the source language influences learning besides the final accuracy score was not considered. Similarly to us, kyle performed a manual error analysis of morphological inflection systems for multiple languages. However, they did not investigate transfer learning, but focused on monolingual models.
Outside the scope of the shared tasks, kann-etal-2017-one investigated cross-lingual transfer for morphological inflection, but was limited to a quantitative analysis. Furthermore, that work experimented with a standard sequence-to-sequence model BIBREF12 in a multi-task training fashion BIBREF24, while we pretrain and fine-tune pointer–generator networks. jin-kann-2017-exploring also investigated cross-lingual transfer in neural sequence-to-sequence models for morphological inflection. However, their experimental setup mimicked kann-etal-2017-one, and the main research questions were different: While jin-kann-2017-exploring asked how cross-lingual knowledge transfer works during multi-task training of neural sequence-to-sequence models on two languages, we investigate if neural inflection models demonstrate interesting differences in production errors depending on the pretraining language. Besides that, we differ in the artificial neural network architecture and language pairs we investigate.
<<</Neural network models for inflection.>>>
<<<Cross-lingual transfer in NLP.>>>
Cross-lingual transfer learning has been used for a large variety of NLP tasks, e.g., automatic speech recognition BIBREF25, entity recognition BIBREF26, language modeling BIBREF27, or parsing BIBREF28, BIBREF29, BIBREF30. Machine translation has been no exception BIBREF31, BIBREF32, BIBREF33. Recent research asked how to automatically select a suitable source language for a given target language BIBREF34. This is similar to our work in that our findings could potentially be leveraged to find good source languages.
<<</Cross-lingual transfer in NLP.>>>
<<<Acquisition of morphological inflection.>>>
Finally, a lot of research has focused on human L1 and L2 acquisition of inflectional morphology BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF40.
To name some specific examples, marques2011study investigated the effect of a stay abroad on Spanish L2 acquisition, including learning of its verbal morphology in English speakers. jia2003acquisition studied how Mandarin Chinese-speaking children learned the English plural morpheme. nicoladis2012young studied the English past tense acquisition in Chinese–English and French–English bilingual children. They found that, while both groups showed similar production accuracy, they differed slightly in the type of errors they made. Also considering the effect of the native language explicitly, yang2004impact investigated the acquisition of the tense-aspect system in an L2 for speakers of a native language which does not mark tense explicitly.
Finally, our work has been weakly motivated by bliss2006l2. There, the author asked a question for human subjects which is similar to the one we ask for neural models: How does the native language influence L2 acquisition of inflectional morphology?
<<</Acquisition of morphological inflection.>>>
<<</Related Work>>>
<<<Conclusion and Future Work>>>
Motivated by the fact that, in humans, learning of a second language is influenced by a learner's native language, we investigated a similar question in artificial neural network models for morphological inflection: How does pretraining on different languages influence a model's learning of inflection in a target language?
We performed experiments on eight different source languages and three different target languages. An extensive error analysis of all final models showed that (i) for closely related source and target languages, acquisition of target language inflection gets easier; (ii) knowledge of a prefixing language makes learning of inflection in a suffixing language more challenging, as well as the other way around; and (iii) languages which exhibit an agglutinative morphology facilitate learning of inflection in a second language.
Future work might leverage those findings to improve neural network models for morphological inflection in low-resource languages, by choosing suitable source languages for pretraining.
Another interesting next step would be to investigate how the errors made by our models compare to those by human L2 learners with different native languages. If the exhibited patterns resemble each other, computational models could be used to predict errors a person will make, which, in turn, could be leveraged for further research or the development of educational material.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nTask\nFormal definition.\nModel\nPointer–Generator Network\nEncoders.\nAttention.\nDecoder.\nPretraining and Finetuning\nExperimental Design\nTarget Languages\nSource Languages\nHyperparameters and Data\nQuantitative Results\nQualitative Results\nStem Errors\nAffix Errors\nMiscellaneous Errors\nError Analysis: English\nError Analysis: Spanish\nError Analysis: Zulu\nLimitations\nRelated Work\nNeural network models for inflection.\nCross-lingual transfer in NLP.\nAcquisition of morphological inflection.\nConclusion and Future Work"
],
"type": "outline"
}
|
1909.04625
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study
<<<Abstract>>>
Neural language models have achieved state-of-the-art performances on many NLP tasks, and recently have been shown to learn a number of hierarchically-sensitive syntactic dependencies between individual words. However, equally important for language processing is the ability to combine words into phrasal constituents, and use constituent-level features to drive downstream expectations. Here we investigate neural models' ability to represent constituent-level features, using coordinated noun phrases as a case study. We assess whether different neural language models trained on English and French represent phrase-level number and gender features, and use those features to drive downstream expectations. Our results suggest that models use a linear combination of NP constituent number to drive CoordNP/verb number agreement. This behavior is highly regular and even sensitive to local syntactic context; however, it differs crucially from observed human behavior. Models have less success with gender agreement. Models trained on large corpora perform best, and there is no obvious advantage for models trained using explicit syntactic supervision.
<<</Abstract>>>
<<<Introduction>>>
Humans deploy structure-sensitive expectations to guide processing during natural language comprehension BIBREF0. While it has been shown that neural language models show similar structure-sensitivity in their predictions about upcoming material BIBREF1, BIBREF2, previous work has focused on dependencies that are conditioned by features attached to a single word, such as subject number BIBREF3, BIBREF4 or wh-question words BIBREF5. There has been no systematic investigation into models' ability to compute phrase-level features—features that are attached to a set of words—and whether models can deploy these more abstract properties to drive downstream expectations.
In this work, we assess whether state-of-the-art neural models can compute and employ phrase-level gender and number features of coordinated subject Noun Phrases (CoordNPs) with two nouns. Typical syntactic phrases are endocentric: they are headed by a single child, whose features determine the agreement requirements for the entire phrase. In Figure FIGREF1, for example, the word star heads the subject NP The star; since star is singular, the verb must be singular. CoordNPs lack endocentricity: neither conjunct NP solely determines the features of the NP as a whole. Instead, these feature values are determined by compositional rules sensitive to the features of the conjuncts and the identity of the coordinator. In Figure FIGREF1, because the coordinator is and, the subject NP number is plural even though both conjuncts (the star and the moon) are singular. As this case demonstrates, the agreement behavior for CoordNPs must be driven by more abstract, constituent-level representations, and cannot be reduced to features hosted on a single lexical item.
We use four suites of experiments to assess whether neural models are able to build up phrase-level representations of CoordNPs on the fly and deploy them to drive humanlike behavior. First, we present a simple control experiment to show that models can represent number and gender features of non-coordinate NPs (Non-coordination Agreement). Second, we show that models modulate their expectations for downstream verb number based on the CoordNP's coordinating conjunction combined with the features of the coordinated nouns (Simple Coordination). We rule out the possibility that models are using simple heuristics by designing a set of stimuli where a simple heuristic would fail due to structural ambiguity (Complex Coordination). The striking success for all models in this experiment indicates that even neural models with no explicit hierarchical bias, trained on a relatively small amount of text are able to learn fine-grained and robust generalizations about the interaction between CoordNPs and local syntactic context. Finally, we use subject–auxiliary inversion to test whether an upstream lexical item modulates model expectation for the phrasal-level features of a downstream CoordNP (Inverted Coordination). Here, we find that all models are insensitive to the fine-grained features of this particular syntactic context. Overall, our results indicate that neural models can learn fine-grained information about the interaction of Coordinated NPs and local syntactic context, but their behavior remains unhumanlike in many key respects.
<<</Introduction>>>
<<<Methods>>>
<<<Psycholinguistics Paradigm>>>
To determine whether state-of-the-art neural architectures are capable of learning humanlike CoordNP/verb agreement properties, we adopt the psycholinguistics paradigm for model assessment. In this paradigm the models are tested using hand-crafted sentences designed to test underlying network knowledge. The assumption here is that if a model implicitly learns humanlike linguistic knowledge during training, its expectations for upcoming words should qualitatively match human expectations in novel contexts. For example, BIBREF1 and BIBREF6 assessed how well neural models had learned the subject/verb number agreement by feeding them with the prefix The keys to the cabinet .... If the models predicted the grammatical continuation are over the ungrammatical continuation is, they can be said to have learned the number agreement insofar as the number of the head noun and not the number of the distractor noun, cabinet, drives expectations about the number of the matrix verb.
If models are able to robustly modulate their expectations based on the internal components of the CoordNP, this will provide evidence that the networks are building up a context-sensitive phrase-level representation. We quantify model expectations as surprisal values. Surprisal is the negative log-conditional probability $S(x_i) = -\log _2 p(x_i|x_1 \dots x_{i-1})$ of a sentence's $i^{th}$ word $x_i$ given the previous words. Surprisal tells us how strongly $x_i$ is expected in context and is known to correlate with human processing difficulty BIBREF7, BIBREF0, BIBREF8. In the CoordNP/verb agreement studies presented here, in cases where the preceding context sets up a high expectation for a number-inflected verb form $w_i$ (e.g. singular `is'), we would expect $S(w_i)$ to be lower than the surprisal of its number-mismatched counterpart (e.g. plural `are').
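To make the measure concrete, the sketch below shows how surprisal and a resulting agreement preference could be computed from a language model's conditional distribution. It is an illustrative sketch only: lm_log_prob is a hypothetical function returning the natural-log probability of a word given a prefix, standing in for whichever model is being probed.

import math

def surprisal(prefix_tokens, word):
    # S(w) = -log2 p(w | prefix)
    return -lm_log_prob(prefix_tokens, word) / math.log(2)

prefix = "The star and the moon".split()
plural_expectation = surprisal(prefix, "is") - surprisal(prefix, "are")
# a positive value means the plural verb is less surprising than the singular one,
# i.e. the model expects plural agreement after the coordinated subject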
<<</Psycholinguistics Paradigm>>>
<<<Models Tested>>>
<<<Recurrent Neural Network (RNN) Language Models>>>
are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on relatively small corpora. The first model, referred to as `LSTM (PTB)' in the following sections, was trained on the sentences from the Penn Treebank BIBREF12. The second model, referred to as `LSTM (FTB)', was trained on the sentences from the French Treebank BIBREF13. We set the size of the input word embedding and LSTM hidden layer of both models to 256.
We also compare LSTM language models trained on large corpora. We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred to as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred to as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred to as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15. We set the size of the input embeddings and hidden layers to 400 for the LSTM (frWaC) model since it is trained on a large dataset.
<<</Recurrent Neural Network (RNN) Language Models>>>
<<<ActionLSTM>>>
models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and opening bracket; generate a terminal node; and close a bracket. To compute surprisal values for a given token, we approximate $P(w_i|w_{1\cdots i-1})$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17.
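One way to realize this approximation is to marginalize over the log-probabilities of the partial parses retained by the beam at consecutive word boundaries, as in the sketch below; this is an illustrative realization, not necessarily the exact procedure of the cited implementation, and the beams are assumed to be given as lists of joint log-probabilities of the partial parses.

import math

def log_marginal(beam_log_probs):
    # log-sum-exp over the partial parses kept in the beam: log sum_y p(w_1..i, y)
    m = max(beam_log_probs)
    return m + math.log(sum(math.exp(lp - m) for lp in beam_log_probs))

def surprisal_at_word(beam_before, beam_after):
    # S(w_i) = -log2 P(w_i | w_1..i-1), approximated as the difference of the two log marginals
    return -(log_marginal(beam_after) - log_marginal(beam_before)) / math.log(2)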
<<</ActionLSTM>>>
<<<Generative Recurrent Neural Network Grammars (RNNG)>>>
jointly model the word sequence as well as the underlying syntactic structure BIBREF18. Following BIBREF19, we estimate surprisal using word-synchronous beam search BIBREF17. We use the same hyper-parameter settings as BIBREF18.
The annotation schemes used to train the syntactically-supervised models differ slightly between French and English. In the PTB (English) CoordNPs are flat structures bearing an `NP' label. In FTB (French), CoordNPs are binary-branching, labeled as NPs, except for the phrasal node dominating the coordinating conjunction, which is labeled `COORD'. We examine the effects of annotation schemes on model performance in Appendix SECREF8.
<<</Generative Recurrent Neural Network Grammars (RNNG)>>>
<<</Models Tested>>>
<<</Methods>>>
<<<Experiment 1: Non-coordination Agreement>>>
In order to provide a baseline for the following experiments, here we assess whether the models tested have learned basic representations of number and gender features for non-coordinated Noun Phrases. We test number agreement in English and French as well as gender agreement in French. Both English and French have two grammatical number features: singular (sg) and plural (pl). French has two grammatical gender features: masculine (m) and feminine (f).
The experimental materials include sentences where the subject NPs contain a single noun which can either match the matrix verb (in the case of number agreement) or a following predicative adjective (in the case of gender agreement). Conditions are given in Table TABREF9 and Table TABREF10. We measure model behavior by computing the plural expectation, i.e. the surprisal of the singular continuation minus the surprisal of the plural continuation, for each item, and take the average within each condition. We expect a positive plural expectation in the Npl conditions and a negative plural expectation in the Nsg conditions. For gender, we compute an analogous gender expectation, which is S(feminine continuation) $-$ S(masculine continuation). We measure surprisal at the verbs and predicative adjectives themselves.
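Given a surprisal function like the one sketched above in the Methods section, the per-condition scores reduce to averaging item-level differences; the field names below are hypothetical placeholders for the hand-crafted materials.

from collections import defaultdict
from statistics import mean

def plural_expectation(item):
    # S(singular continuation) - S(plural continuation), measured at the verb (or predicative adjective) itself
    return (surprisal(item["preamble"], item["singular_form"])
            - surprisal(item["preamble"], item["plural_form"]))

scores_by_condition = defaultdict(list)
for item in items:                                   # items: the hand-crafted test sentences
    scores_by_condition[item["condition"]].append(plural_expectation(item))

condition_means = {cond: mean(vals) for cond, vals in scores_by_condition.items()}
# expected: positive means in the Npl conditions, negative means in the Nsg conditions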
The results for this experiment are in Figure FIGREF11, with the plural expectation and gender expectation on the y-axis and conditions on the x-axis. For this and subsequent experiments error bars represent 95% confidence intervals for across-item means. For number agreement, all the models in English and French show positive plural expectation when the head noun is plural and negative plural expectation when it is singular. For gender agreement, however, only the LSTM (frWaC) shows modulation of gender expectation based on the gender of the head noun. This is most likely due to the lower frequency of predicative adjectives compared to matrix verbs in the corpus.
<<</Experiment 1: Non-coordination Agreement>>>
<<<Experiment 2: Simple Coordination>>>
In this section, we test whether neural language models can use grammatical features hosted on multiple components of a coordination phrase—the coordinated nouns as well as the coordinating conjunction—to drive downstream expectations. We test number agreement in both English and French and gender agreement in French.
<<<Number Agreement>>>
In simple subject/verb number agreement, the number features of the CoordNP are determined by the coordinating conjunction and the number features of the two coordinated NPs. CoordNPs formed by and are plural and thus require plural verbs; CoordNPs formed by or allow either plural or singular verbs, often with the number features of the noun linearly closest to the verb playing a more important role, although this varies cross-linguistically BIBREF20. Forced-choice preference experiments in BIBREF21 reveal that English native speakers prefer singular agreement when the closest conjunct in an or-CoordNP is singular and plural agreement when the closest conjunct is plural. In French, both singular and plural verbs are possible when two singular NPs are joined via disjunction BIBREF22.
In order to assess whether the neural models learn the basic CoordNP licensing for English, we adapted 37 items from BIBREF21, following the 16 conditions outlined in Table TABREF14. Test items consist of the sentence preamble, followed by either the singular or plural BE verb, half the time in present tense (is/are) and half the time in past tense (was/were). We measured the plural expectation, following the procedure in Section SECREF3. We created 24 items using the same conditions as the English experiment to test the models trained in French, using the 3rd person singular and plural forms of the verb aller, `to go' (va, vont). Within each item, nouns match in gender; across all conditions half the nouns are masculine, half feminine.
The results for this experiment can be seen in Figure FIGREF12, with the results for English on the left and French on the right. The results for and are on the top row, or on the bottom row. For all figures the y-axis shows the plural expectation, or the difference in surprisal between the singular condition and the plural condition. Turning first to English-and (Figure FIGREF12), all models show plural expectation (the bars are significantly greater than zero) in the pl_and_pl and sg_and_pl conditions, as expected. For the pl_and_sg condition, only the LSTM (enWiki) and ActionLSTM are greater than zero, indicating humanlike behavior. For the sg_and_sg condition, only the LSTM (enWiki) model shows the correct plural expectation. For the French-and (Figure FIGREF12), all models show positive plural expectation in all conditions, as expected, except for the LSTM (FTB) in the sg_and_sg condition.
Examining the results for English-or, we find that all models demonstrate humanlike expectation in the pl_or_pl and sg_or_pl conditions. The LSTM (1B), LSTM (PTB), and RNNG models show zero or negative plural expectation for the pl_or_sg conditions, as expected. However, the LSTM (enWiki) and ActionLSTM models show positive plural expectation in this condition, indicating that they have not learned the humanlike generalizations. All models show significantly negative plural expectation in the sg_or_sg condition, as expected. In the French-or cases, models show almost identical behavior to the and conditions, except that the LSTM (frWaC) shows smaller plural expectation when singular nouns are linearly proximal to the verb.
These results indicate moderate success at learning coordinate NP agreement; however, this success may be the result of an overly simple heuristic. It appears that the expectation for both plural and masculine continuations is driven by a linear combination of the two nominal number/gender features transferred into log-probability space, with the earlier noun mattering less than the later noun. A model that optimally captures human grammatical preferences should show no or only a slight difference across conditions in the surprisal differential for the and conditions, and that differential should be greater than zero in all cases. Yet, all the models tested show gradient performance based on the number of plural conjuncts.
<<</Number Agreement>>>
<<<Gender Agreement>>>
In French, if two nouns are coordinated with et (and-coordination), agreement must be masculine if there is at least one masculine element in the coordinate structure. If the nouns are coordinated with ou (or-coordination), both masculine and feminine agreement are acceptable BIBREF23, BIBREF24. Although linear proximity effects have been tested for a number of languages that employ grammatical gender, as in e.g. Slavic languages BIBREF25, there is no systematic study for French.
To assess whether the French neural models learned humanlike gender agreement, we created 24 test items, following the examples in Table TABREF16, and measured the masculine expectation. In our test items, the coordinated subject NP is followed by a predicative adjective, which either takes on masculine or feminine gender morphology.
Results from the experiment can be seen in Figure FIGREF17. No model shows a qualitative difference based on the coordinator, and only the LSTM (frWaC) shows a significant behavior difference between conditions. Here, we find positive masculine expectation in the m_and_m and f_and_m conditions, and negative masculine expectation in the f_and_f condition, as expected. However, in the m_and_f condition, the masculine expectation is not significantly different from zero, where we would expect it to be positive. In the or-coordination conditions, following our expectation, masculine expectation is positive when both conjuncts are masculine and negative when both are feminine. For the LSTM (FTB) and ActionLSTM models, the masculine expectation is positive (although not significantly so) in all conditions, consistent with results in Section SECREF3.
<<</Gender Agreement>>>
<<</Experiment 2: Simple Coordination>>>
<<<Experiment 3: Complex Coordination>>>
One possible explanation for the results presented in the previous section is that the models are using a `bag of features' approach to plural and masculine licensing that is opaque to syntactic context: Following a coordinating conjunction surrounded by nouns, models simply expect the following verb to be plural, proportionally to the number of plural nouns.
In this section, we control for this potential confound by conducting two experiments: In the Complex Coordination Control experiments we assess models' ability to extend basic CoordNP licensing into sententially-embedded environments, where the CoordNP can serve as an embedded subject. In the Complex Coordination Critical experiments, we leverage the sentential embedding environment to demonstrate that when the CoordNPs cannot plausibly serve as the subject of the embedded phrase, models are able to suppress the previously-demonstrated expectations set up by these phrases. These results demonstrate that models are not following a simple strategy for predicting downstream number and gender features, but are building up CoordNP representations on the fly, conditioned on the local syntactic context.
<<<Complex Coordination Control>>>
Following certain sentential-embedding verbs, CoordNPs serve unambiguously as the subject of the verb's sentence complement and should trigger number agreement behavior in the main verb of the embedded clause, similar to the behavior presented in SECREF13. To assess this, we use the 37 test items in English and 24 items in French in section SECREF13, following the conditions in Table TABREF19 (for number agreement), testing only and coordination. For gender agreement, we use the same test items and conditions for and coordination in Section SECREF15, but with the Coordinated NPs embedded in a context similar to SECREF18. As before, we derived the plural expectation by measuring the difference in surprisal between the singular and plural continuations and the gender expectation by computing the difference in surprisal between the masculine and feminine predicates.
. Je croyais que les prix et les dépenses étaient importants/importantes.
I thought that the.pl price.mpl and the.pl expense.fpl were important.mpl/fpl
I thought that the prices and the expenses were important.
The results for the control experiments can be seen in Figure FIGREF20, with English number agreement on the top row, French number agreement in the middle row and French gender agreement on the bottom. The y-axis shows either plural or masculine expectation, with the various conditions along the x-axis. For English number agreement, we find that the models behave similarly as they do for simple coordination contexts. All models show significant plural expectation when the closest noun is plural, with only two models demonstrating plural expectation in the sg_and_sg case. The French number agreement tests show similar results, with all models except LSTM (FTB) demonstrating significant plural prediction in all cases. Turning to French gender agreement, only the LSTM (frWaC) shows sensitivity to the various conditions, with positive masculine expectation in the m_and_m condition and negative expectation in the f_and_f condition, as expected. These results indicate that the behavior shown in Section SECREF13 extends to more complex syntactic environments—in this case to sentential embeddings. Interestingly, for some models, such as the LSTM (1B), behavior is more humanlike when the CoordNP serves as the subject of an embedded sentence. This may be because the model, which has a large number of hidden states and may be extra sensitive to fine-grained syntactic information carried on lexical items BIBREF2, is using the complementizer, that, to drive more robust expectations.
<<</Complex Coordination Control>>>
<<<Complex Coordination Critical>>>
In order to assess whether the models' strategy for CoordNP/verb number agreement is sensitive to syntactic context, we contrast the results presented above to those from a second, critical experiment. Here, two coordinated nouns follow a verb that cannot take a sentential complement, as in the examples given in Table TABREF23. Of the two possible continuations—are or is—the plural is only grammatically licensed when the second of the two conjuncts is plural. In these cases, the plural continuation may lead to a final sentence where the first noun serves as the verb's object and the second introduces a second main clause coordinated with the first, as in I fixed the doors and the windows are still broken. For the same reason, the singular-verb continuation is only licensed when the noun immediately following and is singular.
We created 37 test items in both English and French, and calculated the plural expectation. If the models were following a simple strategy to drive CoordNP/verb number agreement, then we should see either no difference in plural expectation across the four conditions or behavior no different from the control experiment. If, however, the models are sensitive to the licensing context, we should see a contrast based solely on the number features of the second conjunct, where plural expectation is positive when the second conjunct is plural, and negative otherwise.
Experimental items for a critical gender test were created similarly, as in Example SECREF22. As with plural agreement, gender expectation should be driven solely by the second conjunct: For the f_and_m and m_and_m conditions, the only grammatical continuation is one where the adjectival predicate bears masculine gender morphology. Conversely, for the m_and_f or f_and_f conditions, the only grammatical continuation is one where the adjectival predicate bears feminine morphology. As in SECREF13, we created 24 test items and measured the gender expectation by calculating the difference in surprisal between the masculine and feminine continuations.
. Nous avons accepté les prix et les dépenses étaient importants/importantes.
we have accepted the.pl price.mpl and the expense.fpl were important.mpl/fpl
We have accepted the prices and the expenses were important.
The results from the critical experiments are in Figure FIGREF21, with the English number agreement on the top row, French number agreement in the middle and gender expectation on the bottom row. Here the y-axis shows either plural expectation or masculine expectation, with the various conditions are on the x-axis. The results here are strikingly different from those in the control experiments. For number agreement, all models in both languages show strong plural expectation in conditions where the second noun is plural (blue and green bars), as they do in the control experiments. Crucially, when the second noun is singular, the plural expectation is significantly negative for all models (save for the French LSTM (FTB) pl_and_sg condition). Turning to gender agreement, only the LSTM (frWaC) model shows differentiation between the four conditions tested. However, whereas the f_and_m and m_and_f gender expectations are not significantly different from zero in the control condition, in the critical condition they pattern with the purely masculine and purely feminine conditions, indicating that, in this syntactic context, the model has successfully learned to base gender expectation solely off of the second noun.
These results are inconsistent with a simple `bag of features' strategy that is insensitive to local syntactic context. They indicate that both models can interpret the same string as either a coordinated noun phrase, or as an NP object and the start of a coordinated VP with the second NP as its subject.
<<</Complex Coordination Critical>>>
<<</Experiment 3: Complex Coordination>>>
<<<Experiment 4: Inverted Coordination>>>
In addition to using phrase-level features to drive expectation about downstream lexical items, human processors can do the inverse—use lexical features to drive expectations about upcoming syntactic chunks. In this experiment, we assess whether neural models use number features hosted on a verb to modulate their expectations for upcoming CoordNPs.
To assess whether neural language models learn inverted coordination rules, we adapted items from Section SECREF13 in both English (37 items) and French (24 items), following the paradigm in Table TABREF24. The first part of the phrase contains either a plural or singular verb and a plural or singular noun. In this case, we measure the surprisal of the continuation `and' (the continuation `or' is grammatical in all conditions, so it is omitted from this study). Our expectation is that `and' is less surprising in the Vpl_Nsg condition than in the Vsg_Nsg condition, where a CoordNP is not licensed by the grammar in either French or English (as in *What is the pig and the cat eating?). We also expect lower surprisal for `and' in the Vpl_Nsg condition, where it is obligatory for a grammatical continuation, than in the Vpl_Npl condition, where it is optional.
For French experimental items, the question is embedded into a sentential-complement taking verb, following Example SECREF6, due to the fact that unembedded subject-verb inverted questions sound very formal and might be relatively rare in the training data.
. Je me demande où vont le maire et
I myself ask where go.3PL the.MSG mayor.MSG and
The results for both languages are shown in Figure FIGREF25, with the surprisal at the coordinator on the y-axis and the various conditions on the x-axis. No model in either language shows the expected difference in surprisal between the Vpl_Nsg and Vpl_Npl conditions or between the Vpl_Nsg and Vsg_Nsg conditions. The LSTM (1B) shows a significant difference between the Vpl_Nsg and Vpl_Npl conditions, but in the opposite direction than expected, with the coordinator less surprising in the latter condition. These results indicate that the models are unable to use the fine-grained context sensitivity to drive expectations for CoordNPs, at least in the inversion setting.
<<</Experiment 4: Inverted Coordination>>>
<<<Discussion>>>
The experiments presented here extend and refine a line of research investigating what linguistic knowledge is acquired by neural language models. Previous studies have demonstrated that sequential models trained on a simple regime of optimizing the next word can learn long-distance syntactic dependencies in impressive detail. Our results provide complementary insights, demonstrating that a range of model architectures trained on a variety of datasets can learn fine-grained information about the interaction of CoordNPs and local syntactic context, but their behavior remains unhumanlike in many key ways. Furthermore, to the best of our knowledge, this work presents the first psycholinguistic analysis of neural language models trained on French, a high-resource language that has so far been under-investigated in this line of research.
In the simple coordination experiment, we demonstrated that models were able to capture some of the agreement behaviors of humans, although their performance deviated in crucial aspects. Whereas human behavior is best modeled as a `percolation' process, the neural models appear to be using a linear combination of NP constituent number to drive CoordNP/verb number agreement, with the second noun weighted more heavily than the first. In these experiments, supervision afforded by the RNNG and ActionLSTM models did not translate into more robust or humanlike learning outcomes. The complex coordination experiments provided evidence that the neural models tested were not using a simple `bag of features' strategy, but were sensitive to syntactic context. All models tested were able to interpret material that had similar surface form in ways that corresponded to two different tree-structural descriptions, based on local context. The inverted coordination experiment provided a contrasting example, in which models were unable to modulate expectations based on subtleties in the syntactic environment.
Across all our experiments, the French models performed consistently better on subject/verb number agreement than on subject/predicate gender agreement. Although there are likely more examples of subject/verb number agreement in the French training data, gender agreement is syntactically mandated and widespread in French. It remains an open question why all but one of the models tested were unable to leverage the numerous examples of gender agreement seen in various contexts during training to drive correct subject/predicate expectations.
<<</Discussion>>>
<<<Acknowledgments>>>
This project is supported by a grant of Labex EFL ANR-10-LABX-0083 (and Idex ANR-18-IDEX-0001) for AA and by the MIT–IBM AI Laboratory and the MIT–SenseTime Alliance on Artificial Intelligence for RPL. We would like to thank the anonymous reviewers for their comments and Anne Abeillé for her advice and feedback.
<<</Acknowledgments>>>
<<<The Effect of Annotation Schemes>>>
This section further investigates the effects of CoordNP annotation schemes on the behaviors of structurally-supervised models. We test whether an explicit COORD phrasal tag improves model performance. We trained two additional RNNG models on 38,546 sentences from the Penn Treebank annotated with two different schemes: The first, RNNG (PTB-control), was trained with the original Penn Treebank annotation. The second, RNNG (PTB-coord), was trained on the same sentences, but with an extended coordination annotation scheme meant to mirror the scheme employed in the FTB, adapted from BIBREF26. We stripped empty categories from their scheme and only kept the NP-COORD label for constituents inside a coordination structure. Figure FIGREF26 illustrates the detailed annotation differences between the two datasets. We tested both models on all the experiments presented in Sections SECREF3-SECREF6 above.
Turning to the results of these six experiments: We see little difference between the two models in the Non-coordination agreement experiment. For the Complex coordination control and Complex coordination critical experiments, both models are largely the same as well. However, in the Simple and-coordination and Simple or-coordination experiments the values for all conditions are shifted upwards for the RNNG PTB-coord model, indicating higher over-all preference for the plural continuation. Furthermore, the range of values is reduced in the RNNG PTB-coord model, compared to the RNNG PTB-control model. These results indicate that adding an explicit COORD phrasal label does not drastically change model performance: Both models still appear to be using a linear combination of number features to drive plural vs. singular expectation. However, the explicit representation has made the interior of the coordination phrase more opaque to the model (each feature matters less) and has slightly shifted model preference towards plural continuations. In this sense, the PTB-coord model may have learned a generalization about CoordNPs, but this generalization remains unlike the ones learned by humans.
<<</The Effect of Annotation Schemes>>>
<<<PTB/FTB Agreement Patterns>>>
We present statistics of subject/predicate agreement patterns in the Penn Treebank (PTB) and French Treebank (FTB) in Table TABREF28 and TABREF29.
<<</PTB/FTB Agreement Patterns>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nMethods\nPsycholinguistics Paradigm\nModels Tested\nRecurrent Neural Network (RNN) Language Models\nActionLSTM\nGenerative Recurrent Neural Network Grammars (RNNG)\nExperiment 1: Non-coordination Agreement\nExperiment 2: Simple Coordination\nNumber Agreement\nGender Agreement\nExperiment 3: Complex Coordination\nComplex Coordination Control\nComplex Coordination Critical\nExperiment 4: Inverted Coordination\nDiscussion\nAcknowledgments\nThe Effect of Annotation Schemes\nPTB/FTB Agreement Patterns"
],
"type": "outline"
}
|
2002.00652
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
How Far are We from Effective Context Modeling ? An Exploratory Study on Semantic Parsing in Context
<<<Abstract>>>
Recently semantic parsing in context has received a considerable attention, which is challenging since there are complex contextual phenomena. Previous works verified their proposed methods in limited scenarios, which motivates us to conduct an exploratory study on context modeling methods under real-world semantic parsing in context. We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. We evaluate 13 context modeling methods on two large complex cross-domain datasets, and our best model achieves state-of-the-art performances on both datasets with significant improvements. Furthermore, we summarize the most frequent contextual phenomena, with a fine-grained analysis on representative models, which may shed light on potential research directions.
<<</Abstract>>>
<<<Introduction>>>
Semantic parsing, which translates a natural language sentence into its corresponding executable logic form (e.g. Structured Query Language, SQL), relieves users from the burden of learning the techniques behind the logic form. The majority of previous studies on semantic parsing assume that queries are context-independent and analyze them in isolation. However, in reality, users prefer to interact with systems in a dialogue, where users are allowed to ask context-dependent incomplete questions BIBREF0. This gives rise to the task of Semantic Parsing in Context (SPC), which is quite challenging as there are complex contextual phenomena. In general, there are two sorts of contextual phenomena in dialogues: Coreference and Ellipsis BIBREF1. Figure FIGREF1 shows a dialogue from the dataset SParC BIBREF2. After the question “What is id of the car with the max horsepower?”, the user poses an elliptical question “How about with the max mpg?”, and a question containing pronouns “Show its Make!”. Only when completely understanding the context could a parser successfully parse the incomplete questions into their corresponding SQL queries.
A number of context modeling methods have been suggested in the literature to address SPC BIBREF3, BIBREF4, BIBREF2, BIBREF5, BIBREF6. These methods proposed to leverage two categories of context: recent questions and precedent logic form. It is natural to leverage recent questions as context. Taking the example from Figure FIGREF1, when parsing $Q_3$, we also need to take $Q_1$ and $Q_2$ as input. We can either simply concatenate the input questions, or use a model to encode them hierarchically BIBREF4. As for the second category, instead of taking a bag of recent questions as input, it only considers the precedent logic form. For instance, when parsing $Q_3$, we only need to take $S_2$ as context. With such a context, the decoder can attend over it, or reuse it via a copy mechanism BIBREF4, BIBREF5. Intuitively, methods that fall into this category enjoy better generalizability, as they only rely on the last logic form as context, no matter at which turn. Notably, these two categories of context can be used simultaneously.
However, it remains unclear how far we are from effective context modeling. First, there is a lack of thorough comparisons of typical context modeling methods on complex SPC (e.g. cross-domain). Second, none of the previous works verified their proposed context modeling methods with the grammar-based decoding technique, which has been developed for years and proven to be highly effective in semantic parsing BIBREF7, BIBREF8, BIBREF9. To obtain better performance, it is worthwhile to study how context modeling methods collaborate with grammar-based decoding. Last but not least, there is limited understanding of how context modeling methods perform on various contextual phenomena. An in-depth analysis can shed light on potential research directions.
In this paper, we try to address the above shortcomings via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis of representative models. Through the analysis, we obtain some interesting findings, which may benefit the community regarding potential research directions. We will open-source our code and materials to facilitate future work upon acceptance.
<<</Introduction>>>
<<<Methodology>>>
In the task of semantic parsing in context, we are given a dataset composed of dialogues. Denoting $\langle \mathbf {x}_1,...,\mathbf {x}_n\rangle $ a sequence of natural language questions in a dialogue, $\langle \mathbf {y}_1,...,\mathbf {y}_n\rangle $ are their corresponding SQL queries. Each SQL query is conditioned on a multi-table database schema, and the databases used in test do not appear in training. In this section, we first present a base model without considering context. Then we introduce 6 typical context modeling methods and describe how we equip the base model with these methods. Finally, we present how to augment the model with BERT BIBREF10.
<<<Base Model>>>
We employ the popularly used attention-based sequence-to-sequence architecture BIBREF11, BIBREF12 to build our base model. As shown in Figure FIGREF6, the base model consists of a question encoder and a grammar-based decoder. For each question, the encoder provides contextual representations, while the decoder generates its corresponding SQL query according to a predefined grammar.
<<<Question Encoder>>>
To capture contextual information within a question, we apply a Bidirectional Long Short-Term Memory network (BiLSTM) as our question encoder BIBREF13, BIBREF14. Specifically, at turn $i$, every token $x_{i,k}$ in $\mathbf {x}_{i}$ is first fed into a word embedding layer $\mathbf {\phi }^x$ to get its embedding representation $\mathbf {\phi }^x{(x_{i,k})}$. On top of the embedding representation, the question encoder obtains a contextual representation $\mathbf {h}^{E}_{i,k}=[\mathbf {h}^{\overrightarrow{E}}_{i,k};\mathbf {h}^{\overleftarrow{E}}_{i,k}]$, where the forward hidden state is computed as $\mathbf {h}^{\overrightarrow{E}}_{i,k}=\mathbf {LSTM}^{\overrightarrow{E}}\big (\mathbf {\phi }^x(x_{i,k}),\,\mathbf {h}^{\overrightarrow{E}}_{i,k-1}\big )$, and the backward hidden state is computed analogously over the reversed question.
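A minimal PyTorch-style sketch of such a question encoder is given below. It is illustrative only and not the authors' implementation; the vocabulary size and dimensions are placeholders.

import torch.nn as nn

class QuestionEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)            # word embedding layer phi^x
        self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, token_ids):                                 # (batch, question_length)
        emb = self.embed(token_ids)                               # phi^x(x_{i,k})
        outputs, _ = self.bilstm(emb)                             # (batch, length, 2 * hidden_dim)
        return outputs                                            # h^E_{i,k} = [forward ; backward]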
<<</Question Encoder>>>
<<<Grammar-based Decoder>>>
The decoder is grammar-based with attention on the input question BIBREF7. Different from producing a SQL query word by word, our decoder outputs a sequence of grammar rules (i.e. actions). Such a sequence has a one-to-one correspondence with the abstract syntax tree of the SQL query. Taking the SQL query in Figure FIGREF6 as an example, it is transformed into the action sequence $\langle $ $\rm \scriptstyle {Start}\rightarrow \rm {Root}$, $\rm \scriptstyle {Root}\rightarrow \rm {Select\ Order}$, $\rm \scriptstyle {Select}\rightarrow \rm {Agg}$, $\rm \scriptstyle {Agg}\rightarrow \rm {max\ Col\ Tab}$, $\rm \scriptstyle {Col}\rightarrow \rm {Id}$, $\rm \scriptstyle {Tab}\rightarrow \rm {CARS\_DATA}$, $\rm \scriptstyle {Order}\rightarrow \rm {desc\ limit\ Agg}$, $\rm \scriptstyle {Agg}\rightarrow \rm {none\ Col\ Tab}$, $\rm \scriptstyle {Col}\rightarrow \rm {Horsepower}$, $\rm \scriptstyle {Tab}\rightarrow \rm {CARS\_DATA}$ $\rangle $ by a left-to-right depth-first traversal of the tree. At each decoding step, a nonterminal is expanded using one of its corresponding grammar rules. The rules are either schema-specific (e.g. $\rm \scriptstyle {Col}\rightarrow \rm {Horsepower}$) or schema-agnostic (e.g. $\rm \scriptstyle {Start}\rightarrow \rm {Root}$). More specifically, as shown at the top of Figure FIGREF6, we slightly modify the $\rm {Order}$-related rules of the grammar proposed by BIBREF9, which has been proven to perform better than the vanilla SQL grammar. Denoting by $\mathbf {LSTM}^{\overrightarrow{D}}$ the unidirectional LSTM used in the decoder, at each decoding step $j$ of turn $i$, it takes the embedding of the previously generated grammar rule $\mathbf {\phi }^y(y_{i,j-1})$ (indicated by the dashed lines in Figure FIGREF6), and updates its hidden state as $\mathbf {h}^{\overrightarrow{D}}_{i,j}=\mathbf {LSTM}^{\overrightarrow{D}}\big ([\mathbf {\phi }^y(y_{i,j-1});\mathbf {c}_{i,j-1}],\,\mathbf {h}^{\overrightarrow{D}}_{i,j-1}\big ),$
where $\mathbf {c}_{i,j-1}$ is the context vector produced by attending on each encoder hidden state $\mathbf {h}^E_{i,k}$ in the previous step:
where $\mathbf {W}^e$ is a learned matrix. $\mathbf {h}^{\overrightarrow{D}}_{i,0}$ is initialized by the final encoder hidden state $\mathbf {h}^E_{i,|\mathbf {x}_{i}|}$, while $\mathbf {c}_{i,0}$ is a zero-vector. For each schema-agnostic grammar rule, $\mathbf {\phi }^y$ returns a learned embedding. For a schema-specific one, the embedding is obtained by passing its schema (i.e. table or column) through another unidirectional LSTM, namely the schema encoder $\mathbf {LSTM}^{\overrightarrow{S}}$. For example, the embedding of $\rm \scriptstyle {Col}\rightarrow \rm {Id}$ is:
As for the output $y_{i,j}$, if the expanded nonterminal corresponds to schema-agnostic grammar rules, we can obtain the output probability of action ${\gamma }$ as:
where $\mathbf {W}^o$ is a learned matrix. When it comes to schema-specific grammar rules, the main challenge is that the model may encounter schemas that never appeared in training, due to the cross-domain setting. To deal with this, we do not directly compute the similarity between the decoder hidden state and the schema-specific grammar rule embedding. Instead, we first obtain the unnormalized linking score $l(x_{i,k},\gamma )$ between the $k$-th token in $\mathbf {x}_i$ and the schema in action $\gamma $. It is computed from both handcrafted features (e.g. word exact match) BIBREF15 and learned similarity (i.e. the dot product between the word embedding and the grammar rule embedding). With the input question as a bridge, we reuse the attention score $a_{i,k}$ in Equation DISPLAY_FORM8 to measure the probability of outputting a schema-specific action $\gamma $ as:
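The linearization from a SQL abstract syntax tree to the action sequence shown in the example above is simply a left-to-right depth-first traversal that emits one grammar rule per expanded node. The sketch below uses a simplified, hypothetical tree representation to illustrate this.

def linearize(node):
    # node: (rule, children), where rule is a grammar rule such as "Root -> Select Order"
    # and children are the subtrees expanding the nonterminals on its right-hand side
    rule, children = node
    actions = [rule]
    for child in children:                  # left-to-right
        actions.extend(linearize(child))    # depth-first
    return actions

# for the tree rooted at "Start -> Root", the traversal yields the action sequence
# ["Start -> Root", "Root -> Select Order", "Select -> Agg", ..., "Tab -> CARS_DATA"]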
<<</Grammar-based Decoder>>>
<<</Base Model>>>
<<<Recent Questions as Context>>>
To take advantage of the question context, we provide the base model with recent $h$ questions as additional input. As shown in Figure FIGREF13, we summarize and generalize three ways to incorporate recent questions as context.
<<<Concat>>>
The method concatenates recent questions with the current question in order, so that the input of the question encoder becomes $[\mathbf {x}_{i-h},\dots ,\mathbf {x}_{i}]$, while the architecture of the base model remains the same. We do not insert special delimiters between questions, since punctuation marks already separate them.
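As a concrete illustration, a minimal sketch of this concatenation strategy follows; the token lists and the window size are hypothetical.

def concat_context(questions, i, h):
    """Build the encoder input for turn i by prepending up to h recent questions.

    questions: list of token lists, one per turn (questions[0] is the first turn).
    Returns a single flat token sequence [x_{i-h}, ..., x_i] with no extra delimiters.
    """
    start = max(0, i - h)
    return [token for question in questions[start:i + 1] for token in question]

# Example: two turns, keeping up to h = 5 recent questions.
turns = [["show", "all", "cars", "."],
         ["which", "one", "has", "the", "maximum", "horsepower", "?"]]
print(concat_context(turns, i=1, h=5))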
<<</Concat>>>
<<<Turn>>>
A dialogue can be seen as a sequence of questions which, in turn, are sequences of words. Considering this hierarchy, BIBREF4 employed a turn-level encoder (i.e., a unidirectional LSTM) to encode recent questions hierarchically. At turn $i$, the turn-level encoder takes the previous question vector $[\mathbf {h}^{\overleftarrow{E}}_{i-1,1},\mathbf {h}^{\overrightarrow{E}}_{i-1,|\mathbf {x}_{i-1}|}]$ as input, and updates its hidden state to $\mathbf {h}^{\overrightarrow{T}}_{i}$. Then $\mathbf {h}^{\overrightarrow{T}}_{i}$ is fed into $\mathbf {LSTM}^E$ as an implicit context. Accordingly, Equation DISPLAY_FORM4 is rewritten as:
Similar to Concat, BIBREF4 allowed the decoder to attend over all encoder hidden states. To make the decoder distinguish hidden states from different turns, they further proposed a relative distance embedding ${\phi }^{d}$ in the attention computation. Taking the above into account, Equation DISPLAY_FORM8 becomes:
where $t{\in }[0,\dots ,h]$ represents the relative distance.
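The hierarchy described above can be sketched roughly as follows; this is a simplified stand-in for the turn-level encoder, and the dimensions as well as the way its state is injected into the question encoder are assumptions.

import torch
import torch.nn as nn

class TurnLevelEncoder(nn.Module):
    """Unidirectional LSTM over per-question summary vectors (simplified sketch)."""

    def __init__(self, question_dim=400, turn_dim=200):
        super().__init__()
        self.turn_lstm = nn.LSTM(question_dim, turn_dim, batch_first=True)

    def forward(self, question_vectors):
        # question_vectors: (1, n_turns, question_dim), one summary vector per past turn.
        outputs, _ = self.turn_lstm(question_vectors)
        # The last state summarizes the dialogue so far; it is then fed into the
        # question encoder as an implicit context, as described above.
        return outputs[:, -1, :]

# Example: two previous turns, each summarized by a 400-dim question vector.
context = TurnLevelEncoder()(torch.randn(1, 2, 400))
print(context.shape)  # torch.Size([1, 200])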
<<</Turn>>>
<<<Gate>>>
To jointly model the decoder attention at the token level and the question level, inspired by advances in the open-domain dialogue area BIBREF16, we propose a gate mechanism to automatically compute the importance of each question. The importance is computed by:
where $\lbrace \mathbf {V}^{g},\mathbf {W}^g,\mathbf {U}^g\rbrace $ are learned parameters and $0\,{\le }\,t\,{\le }\,h$. As in Equation DISPLAY_FORM17, but without the relative distance embedding, the decoder of Gate also attends over all the encoder hidden states. The question-level importance $\bar{g}_{i-t}$ is then employed as the coefficient of the attention scores at turn $i\!-\!t$.
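A rough sketch of the gate computation is given below. Since the exact formula is in the omitted equation, the additive scoring and the softmax normalization here are assumptions meant only to convey the idea of question-level importance weights.

import torch
import torch.nn as nn

class QuestionGate(nn.Module):
    """Scores how important each recent question is for the current decoding step."""

    def __init__(self, enc_dim=400, dec_dim=200, gate_dim=200):
        super().__init__()
        self.W = nn.Linear(enc_dim, gate_dim, bias=False)   # maps question summaries
        self.U = nn.Linear(dec_dim, gate_dim, bias=False)   # maps the decoder state
        self.v = nn.Linear(gate_dim, 1, bias=False)         # scalar score per question

    def forward(self, question_summaries, dec_state):
        # question_summaries: (h + 1, enc_dim), one summary per question in the window.
        # dec_state:          (dec_dim,), current decoder hidden state.
        scores = self.v(torch.tanh(self.W(question_summaries) + self.U(dec_state)))
        weights = torch.softmax(scores.squeeze(-1), dim=0)  # question-level importance
        return weights  # later multiplied into the token-level attention scores per turn

gate = QuestionGate()
print(gate(torch.randn(4, 400), torch.randn(200)))  # 4 weights summing to 1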
<<</Gate>>>
<<</Recent Questions as Context>>>
<<<Precedent SQL as Context>>>
Besides recent questions, as mentioned in Section SECREF1, the precedent SQL can also serve as context. As shown in Figure FIGREF27, using $\mathbf {y}_{i-1}$ requires a SQL encoder, for which we employ another BiLSTM. The $m$-th contextual action representation at turn $i\!-\!1$, $\mathbf {h}^A_{i-1,m}$, can be obtained by passing the action sequence through the SQL encoder.
<<<SQL Attn>>>
Attention over $\mathbf {y}_{i-1}$ is a straightforward method to incorporate the SQL context. Given $\mathbf {h}^A_{i-1,m}$, we compute the attention scores in a manner similar to Equation DISPLAY_FORM8 and thus obtain the SQL context vector. This vector is employed as an additional input for the decoder in Equation DISPLAY_FORM7.
<<</SQL Attn>>>
<<<Action Copy>>>
To reuse the previously generated SQL, BIBREF5 presented a token-level copy mechanism on their non-grammar based parser. Inspired by them, we propose an action-level copy mechanism suited for grammar-based decoding. It enables the decoder to copy actions appearing in $\mathbf {y}_{i-1}$, when the actions are compatible with the currently expanded nonterminal. As the copied actions lie in the same semantic space as the generated ones, the output probability for action $\gamma $ is a mix of generating ($\mathbf {g}$) and copying ($\mathbf {c}$). The generating probability $P(y_{i,j}\!=\!{\gamma }\,|\,\mathbf {g})$ follows Equation DISPLAY_FORM10 and DISPLAY_FORM11, while the copying probability is:
where $\mathbf {W}^l$ is a learned matrix. Denoting by $P^{copy}_{i,j}$ the probability of copying at decoding step $j$ of turn $i$, it can be obtained by $\sigma (\mathbf {W}^{c}\mathbf {h}^{\overrightarrow{D}}_{i,j}+\mathbf {b}^{c})$, where $\lbrace \mathbf {W}^{c},\mathbf {b}^{c}\rbrace $ are learned parameters and $\sigma $ is the sigmoid function. The final probability $P(y_{i,j}={\gamma })$ is computed by:
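The final mixture can be sketched in a few lines. The sketch assumes the generation and copy distributions are already normalized over the candidate actions, which may differ slightly from the exact formulation in the omitted equation.

import torch

def mix_generate_and_copy(p_generate, p_copy_dist, p_copy_gate):
    """P(y = gamma) as a gated mixture of generating and copying an action.

    p_generate:  (n_actions,) probability of generating each candidate action
    p_copy_dist: (n_actions,) probability of copying each candidate action from y_{i-1}
                 (zero for actions that do not appear in the precedent SQL)
    p_copy_gate: scalar gate P^{copy}_{i,j} produced by a sigmoid
    """
    return p_copy_gate * p_copy_dist + (1.0 - p_copy_gate) * p_generate

final = mix_generate_and_copy(torch.tensor([0.7, 0.2, 0.1]),
                              torch.tensor([0.0, 1.0, 0.0]),
                              torch.tensor(0.3))
print(final)  # tensor([0.4900, 0.4400, 0.0700])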
<<</Action Copy>>>
<<<Tree Copy>>>
Besides the action-level copy, we also introduce a tree-level copy mechanism. As illustrated in Figure FIGREF27, the tree-level copy mechanism enables the decoder to copy action subtrees extracted from $\mathbf {y}_{i-1}$, which shrinks the number of decoding steps by a large margin. A similar idea has been proposed in a non-grammar based decoder BIBREF4. In fact, a subtree is an action sequence starting from specific nonterminals, such as ${\rm Select}$. To give an example, $\langle $ $\rm \scriptstyle {Select}\rightarrow \rm {Agg}$, $\rm \scriptstyle {Agg}\rightarrow \rm {max\ Col\ Tab}$, $\rm \scriptstyle {Col}\rightarrow \rm {Id}$, $\rm \scriptstyle {Tab}\rightarrow \rm {CARS\_DATA}$ $\rangle $ makes up a subtree of the tree in Figure FIGREF6. For a subtree $\upsilon $, its representation $\phi ^{t}(\upsilon )$ is the final hidden state of the SQL encoder, which encodes its corresponding action sequence. Then we can obtain the output probability of subtree $\upsilon $ as:
where $\mathbf {W}^t$ is a learned matrix. The output probabilities of subtrees are normalized together with Equation DISPLAY_FORM10 and DISPLAY_FORM11.
<<</Tree Copy>>>
<<</Precedent SQL as Context>>>
<<<BERT Enhanced Embedding>>>
We employ BERT BIBREF10 to augment our model by enhancing the embeddings of questions and schemas. We first concatenate the input question and all the schemas in a deterministic order with [SEP] as the delimiter BIBREF17. For instance, the input for $Q_1$ in Figure FIGREF1 is “What is id ... max horsepower? [SEP] CARS_NAMES [SEP] MakeId ... [SEP] Horsepower”. Feeding it into BERT, we obtain the schema-aware question representations and question-aware schema representations. These contextual representations subsequently substitute for $\phi ^x$, while other parts of the model remain the same.
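A minimal sketch of how such an input string can be fed to BERT with the transformers library (assuming a recent version) is shown below; the concrete question, the schema items and the use of bert-base-uncased are placeholders, and the real model splits the resulting contextual vectors back into question and schema representations.

from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

question = "What is id ... max horsepower?"
schemas = ["CARS_NAMES", "MakeId", "Horsepower"]   # placeholder subset of the schema
text = " [SEP] ".join([question] + schemas)        # question [SEP] schema_1 [SEP] ...

inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
# One contextual vector per wordpiece; these replace the static embeddings phi^x.
print(outputs.last_hidden_state.shape)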
<<</BERT Enhanced Embedding>>>
<<</Methodology>>>
<<<Experiment & Analysis>>>
We conduct experiments to study whether the introduced methods are able to effectively model context in the task of SPC (Section SECREF36), and further perform a fine-grained analysis on various contextual phenomena (Section SECREF40).
<<<Experimental Setup>>>
<<<Dataset>>>
Two large complex cross-domain datasets are used: SParC BIBREF2 consists of 3034 / 422 dialogues for train / development, and CoSQL BIBREF6 consists of 2164 / 292 ones. The average turn numbers of SParC and CoSQL are $3.0$ and $5.2$, respectively.
<<</Dataset>>>
<<<Evaluation Metrics>>>
We evaluate each predicted SQL query using exact set match accuracy BIBREF2. Based on it, we consider three metrics: Question Match (Ques.Match), the match accuracy over all questions, Interaction Match (Int.Match), the match accuracy over all dialogues, and Turn $i$ Match, the match accuracy over questions at turn $i$.
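For clarity, a small sketch of how these metrics can be computed from per-question exact-match flags is given below; the data layout is hypothetical.

def question_and_interaction_match(dialogues):
    """dialogues: list of dialogues, each a list of booleans (exact set match per question)."""
    flags = [flag for dialogue in dialogues for flag in dialogue]
    ques_match = sum(flags) / len(flags)
    int_match = sum(all(dialogue) for dialogue in dialogues) / len(dialogues)
    return ques_match, int_match

def turn_i_match(dialogues, i):
    """Match accuracy over questions at turn i (1-indexed)."""
    flags = [dialogue[i - 1] for dialogue in dialogues if len(dialogue) >= i]
    return sum(flags) / len(flags)

preds = [[True, True, False], [True, True]]
print(question_and_interaction_match(preds))  # (0.8, 0.5)
print(turn_i_match(preds, 2))                 # 1.0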
<<</Evaluation Metrics>>>
<<<Implementation Detail>>>
Our implementation is based on PyTorch BIBREF18, AllenNLP BIBREF19 and the transformers library BIBREF20. We adopt the Adam optimizer and set the learning rate to 1e-3 on all modules except for BERT, for which a learning rate of 1e-5 is used BIBREF21. The dimensions of the word embedding, action embedding and distance embedding are 100, while the hidden state dimensions of the question encoder, grammar-based decoder, turn-level encoder and SQL encoder are 200. We initialize word embeddings using GloVe BIBREF22 for non-BERT models. For methods which use the recent $h$ questions, $h$ is set to 5 on both datasets.
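One way to realize the two learning rates in PyTorch is with parameter groups, as sketched below; parser and bert are placeholder modules standing in for the actual components.

import torch
import torch.nn as nn

# Placeholder modules standing in for the parser and the BERT encoder.
parser = nn.LSTM(100, 200)
bert = nn.Linear(768, 200)

optimizer = torch.optim.Adam([
    {"params": parser.parameters(), "lr": 1e-3},  # all modules except BERT
    {"params": bert.parameters(), "lr": 1e-5},    # fine-tuned BERT parameters
])
print([group["lr"] for group in optimizer.param_groups])  # [0.001, 1e-05]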
<<</Implementation Detail>>>
<<<Baselines>>>
We consider three models as our baselines. SyntaxSQL-con and CD-Seq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2. SyntaxSQL-con employs a BiLSTM model to encode dialogue history upon the SyntaxSQLNet model (analogous to our Turn) BIBREF23, while CD-Seq2Seq is adapted from BIBREF4 for cross-domain settings (analogous to our Turn+Tree Copy). EditSQL BIBREF5 is a SOTA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy).
<<</Baselines>>>
<<</Experimental Setup>>>
<<<Model Comparison>>>
Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.
To conduct a thorough comparison, we evaluate 13 different context modeling methods upon the same parser, including 6 methods introduced in Section SECREF2 and 7 selective combinations of them (e.g., Concat+Action Copy). The experimental results are presented in Figure FIGREF37. Taken as a whole, it is very surprising to observe that none of these methods can be consistently superior to the others. The experimental results on BERT-based models show the same trend. Diving deep into the methods only using recent questions as context, we observe that Concat and Turn perform competitively, outperforming Gate by a large margin. With respect to the methods only using precedent SQL as context, Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics. In addition, we observe that there is little difference in the performance of Action Copy and Concat, which implies that using precedent SQL as context gives almost the same effect as using recent questions. In terms of the combinations of different context modeling methods, they do not significantly improve the performance as we expected.
As mentioned in Section SECREF1, intuitively, methods which only use the precedent SQL enjoy better generalizability. To validate this, we further conduct an out-of-distribution experiment to assess the generalizability of different context modeling methods. Concretely, we select three representative methods and train them on questions at turn 1 and 2, and test them on turns 3, 4 and beyond. As shown in Figure FIGREF38, Action Copy has a consistently comparable or better performance, validating the intuition. Meanwhile, Concat appears to be strikingly competitive, demonstrating that it also has good generalizability. Compared with them, Turn is more vulnerable to out-of-distribution questions.
In conclusion, existing context modeling methods in the task of SPC are not as effective as expected, since they do not show a significant advantage over the simple concatenation method.
<<</Model Comparison>>>
<<<Fine-grained Analysis>>>
By a careful investigation of contextual phenomena, we summarize them in multiple hierarchies. Roughly, there are three kinds of contextual phenomena in questions: semantically complete, coreference and ellipsis. Semantically complete means a question can reflect all the meaning of its corresponding SQL. Coreference means a question contains pronouns, while ellipsis means the question cannot reflect all of its SQL, even if resolving its pronouns. At the fine-grained level, coreference can be divided into 5 types according to its pronoun BIBREF1. Ellipsis can be characterized by its intention: continuation and substitution. Continuation is to augment extra semantics (e.g. ${\rm Filter}$), and substitution refers to the situation where the current question is intended to substitute particular semantics in the precedent question. Substitution can be further branched into 4 types: explicit vs. implicit and schema vs. operator. Explicit means the current question provides contextual clues (i.e., partial context overlaps with the precedent question) to help locate the substitution target, while implicit does not. In most cases, the target is a schema or an operator. In order to study the effect of context modeling methods on various phenomena, as shown in Table TABREF39, we take the development set of SParC as an example to perform our analysis. The analysis begins by presenting Ques.Match of three representative models on the above fine-grained types in Figure FIGREF42. As shown, though different methods have different strengths, they all perform poorly on certain types, which will be elaborated below.
<<<Coreference>>>
Diving deep into coreference (left of Figure FIGREF42), we observe that all methods struggle with two fine-grained types: definite noun phrases and one anaphora. Through our study, we find that the scope of the antecedent is a key factor. An antecedent is one or more entities referred to by a pronoun. Its scope is either whole, where the antecedent is the precedent answer, or partial, where the antecedent is part of the precedent question. The above-mentioned fine-grained types are more challenging as their partial proportions are nearly $40\%$, while for demonstrative pronouns it is only $22\%$. This is reasonable as partial requires complex inference on context. Considering the 4th example in Table TABREF39, “one” refers to “pets” instead of “age” because the accompanying verb is “weigh”. From this observation, we draw the conclusion that current context modeling methods do not succeed on pronouns which require complex inference on the context.
<<</Coreference>>>
<<<Ellipsis>>>
As for ellipsis (right of Figure FIGREF42), we obtain three interesting findings by comparisons in three aspects. The first finding is that all models have a better performance on continuation than on substitution. This is expected since there are redundant semantics in substitution, but not in continuation. Considering the 8th example in Table TABREF39, “horsepower” is a redundant semantic which may introduce noise into SQL prediction. The second finding comes from the unexpected drop from implicit(substitution) to explicit(substitution). Intuitively, explicit should surpass implicit on substitution as it provides more contextual clues. The finding demonstrates that contextual clues are obviously not well utilized by the context modeling methods. Third, compared with schema(substitution), operator(substitution) achieves a comparable or better performance consistently. We believe this is caused by the cross-domain setting, which makes schema-related substitution more difficult.
<<</Ellipsis>>>
<<</Fine-grained Analysis>>>
<<</Experiment & Analysis>>>
<<<Related Work>>>
The most related work is the line of semantic parsing in context. In the SQL domain, BIBREF24 proposed a context-independent CCG parser and then applied it to do context-dependent substitution, BIBREF3 applied a search-based method for sequential questions, and BIBREF4 provided the first sequence-to-sequence solution in the area. More recently, BIBREF5 presented an edit-based method to reuse the previously generated SQL. With respect to other logic forms, BIBREF25 focuses on understanding execution commands in context, BIBREF26 on question answering over knowledge bases in a conversation, and BIBREF27 on code generation in environment context. Our work is different from theirs as we perform an exploratory study, which has not been carried out in previous work.
There are also several related works that provide studies on context. BIBREF17 explored contextual representations in context-independent semantic parsing, and BIBREF28 studied how conversational agents use conversation history to generate responses. Different from them, our task focuses on context modeling for semantic parsing. Under the same task, BIBREF1 summarized contextual phenomena at a coarse-grained level, while BIBREF0 performed a wizard-of-oz experiment to study the most frequent phenomena. What makes our work different from them is that we not only summarize contextual phenomena by fine-grained types, but also perform an analysis on context modeling methods.
<<</Related Work>>>
<<<Conclusion & Future Work>>>
This work conducts an exploratory study on semantic parsing in context, to realize how far we are from effective context modeling. Through a thorough comparison, we find that existing context modeling methods are not as effective as expected, and that a simple concatenation method can be highly competitive. Furthermore, by performing a fine-grained analysis, we summarize two potential directions as our future work: incorporating common sense for better pronoun inference, and modeling contextual clues in a more explicit manner. By open-sourcing our code and materials, we believe our work can help the community debug models at a fine-grained level and make more progress.
<<</Conclusion & Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nMethodology\nBase Model\nQuestion Encoder\nGrammar-based Decoder\nRecent Questions as Context\nConcat\nTurn\nGate\nPrecedent SQL as Context\nSQL Attn\nAction Copy\nTree Copy\nBERT Enhanced Embedding\nExperiment & Analysis\nExperimental Setup\nDataset\nEvaluation Metrics\nImplementation Detail\nBaselines\nModel Comparison\nFine-grained Analysis\nCoreference\nEllipsis\nRelated Work\nConclusion & Future Work"
],
"type": "outline"
}
|
1909.00324
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A Novel Aspect-Guided Deep Transition Model for Aspect Based Sentiment Analysis
<<<Abstract>>>
Aspect based sentiment analysis (ABSA) aims to identify the sentiment polarity towards the given aspect in a sentence, while previous models typically exploit an aspect-independent (weakly associative) encoder for sentence representation generation. In this paper, we propose a novel Aspect-Guided Deep Transition model, named AGDT, which utilizes the given aspect to guide the sentence encoding from scratch with the specially-designed deep transition architecture. Furthermore, an aspect-oriented objective is designed to enforce AGDT to reconstruct the given aspect with the generated sentence representation. In doing so, our AGDT can accurately generate aspect-specific sentence representation, and thus conduct more accurate sentiment predictions. Experimental results on multiple SemEval datasets demonstrate the effectiveness of our proposed approach, which significantly outperforms the best reported results with the same setting.
<<</Abstract>>>
<<<Introduction>>>
Aspect based sentiment analysis (ABSA) is a fine-grained task in sentiment analysis, which can provide important sentiment information for other natural language processing (NLP) tasks. There are two different subtasks in ABSA, namely, aspect-category sentiment analysis and aspect-term sentiment analysis BIBREF0, BIBREF1. Aspect-category sentiment analysis aims at predicting the sentiment polarity towards the given aspect, which belongs to one of several predefined categories and may not appear in the sentence. For instance, in Table TABREF2, aspect-category sentiment analysis predicts the sentiment polarity towards the aspect “food”, which does not appear in the sentence. By contrast, the goal of aspect-term sentiment analysis is to predict the sentiment polarity over the aspect term, which is a subsequence of the sentence. For instance, aspect-term sentiment analysis predicts the sentiment polarity towards the aspect term “The appetizers”, which is a subsequence of the sentence. Additionally, the number of aspect-term categories is more than one thousand in the training corpus.
As shown in Table TABREF2, sentiment polarity may be different when different aspects are considered. Thus, the given aspect (term) is crucial to ABSA tasks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Besides, BIBREF7 show that not all words of a sentence are useful for the sentiment prediction towards a given aspect (term). For instance, when the given aspect is the “service”, the words “appetizers” and “ok” are irrelevant for the sentiment prediction. Therefore, an aspect-independent (weakly associative) encoder may encode such background words (e.g., “appetizers” and “ok”) into the final representation, which may lead to an incorrect prediction.
Numerous existing models BIBREF8, BIBREF9, BIBREF10, BIBREF1 typically utilize an aspect-independent encoder to generate the sentence representation, and then apply the attention mechanism BIBREF11 or a gating mechanism to conduct feature selection and extraction, even though feature selection and extraction may then be based on noisy representations. In addition, some models BIBREF12, BIBREF13, BIBREF14 simply concatenate the aspect embedding with each word embedding of the sentence, and then leverage conventional Long Short-Term Memories (LSTMs) BIBREF15 to generate the sentence representation. However, this is insufficient to exploit the given aspect and to conduct potentially complex feature selection and extraction.
To address this issue, we investigate a novel architecture to enhance the capability of feature selection and extraction with the guidance of the given aspect from scratch. Based on the deep transition Gated Recurrent Unit (GRU) BIBREF16, BIBREF17, BIBREF18, BIBREF19, an aspect-guided GRU encoder is thus proposed, which utilizes the given aspect to guide the sentence encoding procedure at the very beginning stage. In particular, we specially design an aspect-gate for the deep transition GRU to control the information flow of each token input, with the aim of guiding feature selection and extraction from scratch, i.e. sentence representation generation. Furthermore, we design an aspect-oriented objective to enforce our model to reconstruct the given aspect, with the sentence representation generated by the aspect-guided encoder. We name this Aspect-Guided Deep Transition model as AGDT. With all the above contributions, our AGDT can accurately generate an aspect-specific representation for a sentence, and thus conduct more accurate sentiment predictions towards the given aspect.
We evaluate the AGDT on multiple datasets of two subtasks in ABSA. Experimental results demonstrate the effectiveness of our proposed approach. And the AGDT significantly surpasses existing models with the same setting and achieves state-of-the-art performance among the models without using additional features (e.g., BERT BIBREF20). Moreover, we also provide empirical and visualization analysis to reveal the advantages of our model. Our contributions can be summarized as follows:
We propose an aspect-guided encoder, which utilizes the given aspect to guide the encoding of a sentence from scratch, in order to conduct the aspect-specific feature selection and extraction at the very beginning stage.
We propose an aspect-reconstruction approach to further guarantee that the aspect-specific information has been fully embedded into the sentence representation.
Our AGDT substantially outperforms previous systems with the same setting, and achieves state-of-the-art results on benchmark datasets compared to those models without leveraging additional features (e.g., BERT).
<<</Introduction>>>
<<<Model Description>>>
As shown in Figure FIGREF6, the AGDT model mainly consists of three parts: aspect-guided encoder, aspect-reconstruction and aspect concatenated embedding. The aspect-guided encoder is specially designed to guide the encoding of a sentence from scratch for conducting the aspect-specific feature selection and extraction at the very beginning stage. The aspect-reconstruction aims to guarantee that the aspect-specific information has been fully embedded in the sentence representation for more accurate predictions. The aspect concatenated embedding part is used to concatenate the aspect embedding and the generated sentence representation so as to make the final prediction.
<<<Aspect-Guided Encoder>>>
The aspect-guided encoder is the core module of AGDT, which consists of two key components: Aspect-guided GRU and Transition GRU BIBREF16.
A-GRU: Aspect-guided GRU (A-GRU) is a specially-designed unit for the ABSA tasks, which is an extension of the L-GRU proposed by BIBREF19. In particular, we design an aspect-gate to select aspect-specific representations through controlling the transformation scale of token embeddings at each time step.
At time step $t$, the hidden state $\mathbf {h}_{t}$ is computed as follows:
where $\odot $ represents element-wise product; $\mathbf {z}_{t}$ is the update gate BIBREF16; and $\widetilde{\mathbf {h}}_{t}$ is the candidate activation, which is computed as:
where $\mathbf {g}_{t}$ denotes the aspect-gate; $\mathbf {x}_{t}$ represents the input word embedding at time step $t$; $\mathbf {r}_{t}$ is the reset gate BIBREF16; $\textbf {H}_1(\mathbf {x}_{t})$ and $\textbf {H}_2(\mathbf {x}_{t})$ are the linear transformation of the input $\mathbf {x}_{t}$, and $\mathbf {l}_{t}$ is the linear transformation gate for $\mathbf {x}_{t}$ BIBREF19. $\mathbf {r}_{t}$, $\mathbf {z}_{t}$, $\mathbf {l}_{t}$, $\mathbf {g}_{t}$, $\textbf {H}_{1}(\mathbf {x}_{t})$ and $\textbf {H}_{2}(\mathbf {x}_{t})$ are computed as:
where “$\mathbf {a}$” denotes the embedding of the given aspect, which is the same at each time step. The update gate $\mathbf {z}_t$ and reset gate $\mathbf {r}_t$ are the same as those in the conventional GRU.
In Eq. (DISPLAY_FORM9) $\sim $ (), the aspect-gate $\mathbf {g}_{t}$ controls both nonlinear and linear transformations of the input $\mathbf {x}_{t}$ under the guidance of the given aspect at each time step. Besides, we exploit a linear transformation gate $\mathbf {l}_{t}$ to control the linear transformation of the input, according to the current input $\mathbf {x}_t$ and the previous hidden state $\mathbf {h}_{t-1}$, which has been proven powerful in the deep transition architecture BIBREF19.
As a consequence, A-GRU can control both non-linear transformation and linear transformation for input $\mathbf {x}_{t}$ at each time step, with the guidance of the given aspect, i.e., A-GRU can guide the encoding of aspect-specific features and block the aspect-irrelevant information at the very beginning stage.
T-GRU: Transition GRU (T-GRU) BIBREF17 is a crucial component of the deep transition block, which is a special case of GRU with only “state” as an input, namely, its input embedding is a zero vector. As in Figure FIGREF6, a deep transition block consists of an A-GRU followed by several T-GRUs at each time step. For the current time step $t$, the output of one A-GRU/T-GRU is fed into the next T-GRU as the input. The output of the last T-GRU at time step $t$ is fed into the A-GRU at time step $t+1$. For a T-GRU, each hidden state at both time step $t$ and transition depth $i$ is computed as:
where the update gate $\mathbf {z}_{t}^i$ and the reset gate $\mathbf {r}_{t}^i$ are computed as:
The AGDT encoder is based on deep transition cells, where each cell is composed of one A-GRU at the bottom, followed by several T-GRUs. Such an AGDT model can encode the sentence representation with the guidance of aspect information by utilizing the specially designed architecture.
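To convey the idea of the aspect gate without reproducing the exact (omitted) equations, a deliberately simplified cell is sketched below: the aspect embedding gates the token input before a standard GRU-style update. The real A-GRU additionally controls a linear transformation path via the $\mathbf {l}_{t}$ gate and sits at the bottom of the deep transition stack, so this code is only a rough approximation.

import torch
import torch.nn as nn

class SimplifiedAGRUCell(nn.Module):
    """Aspect-gated GRU cell (simplified sketch, not the exact AGDT formulation)."""

    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        self.aspect_gate = nn.Linear(emb_dim + emb_dim, emb_dim)  # from [x_t; a]
        self.gru_cell = nn.GRUCell(emb_dim, hidden_dim)

    def forward(self, x_t, aspect, h_prev):
        # g_t scales each dimension of the token input under the guidance of the aspect,
        # so aspect-irrelevant information is suppressed before the recurrent update.
        g_t = torch.sigmoid(self.aspect_gate(torch.cat([x_t, aspect], dim=-1)))
        return self.gru_cell(g_t * x_t, h_prev)

cell = SimplifiedAGRUCell(emb_dim=300, hidden_dim=300)
h = cell(torch.randn(1, 300), torch.randn(1, 300), torch.zeros(1, 300))
print(h.shape)  # torch.Size([1, 300])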
<<</Aspect-Guided Encoder>>>
<<<Aspect-Reconstruction>>>
We propose an aspect-reconstruction approach to guarantee that the aspect-specific information has been fully embedded in the sentence representation. In particular, we devise two objectives for the two ABSA subtasks. In aspect-category sentiment analysis datasets, there are only several predefined aspect categories, while in aspect-term sentiment analysis datasets, the number of term categories is more than one thousand. In a real-life scenario, the number of terms is infinite, while the words that make up terms are limited. Thus we design different loss functions for these two scenarios.
For the aspect-category sentiment analysis task, we aim to reconstruct the aspect according to the aspect-specific representation. It is a multi-class problem. We take the softmax cross-entropy as the loss function:
where C1 is the number of predefined aspects in the training example; ${y}_{i}^{c}$ is the ground-truth and ${p}_{i}^{c}$ is the estimated probability of an aspect.
For the aspect-term sentiment analysis task, we intend to reconstruct the aspect term (may consist of multiple words) according to the aspect-specific representation. It is a multi-label problem and thus the sigmoid cross-entropy is applied:
where C2 denotes the number of words that constitute all terms in the training example, ${y}_{i}^{t}$ is the ground-truth and ${p}_{i}^{t}$ represents the predicted value of a word.
Our aspect-oriented objective consists of $\mathcal {L}_{c}$ and $\mathcal {L}_{t}$, which guarantee that the aspect-specific information has been fully embedded into the sentence representation.
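The two reconstruction objectives can be sketched with standard PyTorch losses, as below; the tensor names, shapes and vocabulary sizes are illustrative only.

import torch
import torch.nn.functional as F

# Aspect-category reconstruction: multi-class, softmax cross-entropy (the L_c objective).
category_logits = torch.randn(1, 5)      # scores over C1 = 5 predefined aspects
category_target = torch.tensor([2])      # index of the gold aspect
loss_c = F.cross_entropy(category_logits, category_target)

# Aspect-term reconstruction: multi-label over term words, sigmoid cross-entropy (L_t).
term_logits = torch.randn(1, 1000)       # scores over C2 = 1000 candidate term words
term_target = torch.zeros(1, 1000)
term_target[0, [17, 42]] = 1.0           # the words composing the gold term
loss_t = F.binary_cross_entropy_with_logits(term_logits, term_target)

print(loss_c.item(), loss_t.item())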
<<</Aspect-Reconstruction>>>
<<<Training Objective>>>
The final loss function is as follows:
where the underlined part denotes the conventional loss function; C is the number of sentiment labels; ${y}_{i}$ is the ground-truth and ${p}_{i}$ represents the estimated probability of the sentiment label; $\mathcal {L}$ is the aspect-oriented objective, where Eq. DISPLAY_FORM14 is for the aspect-category sentiment analysis task and Eq. DISPLAY_FORM15 is for the aspect-term sentiment analysis task. And $\lambda $ is the weight of $\mathcal {L}$.
As shown in Figure FIGREF6, we employ the aspect reconstruction approach to reconstruct the aspect (term), where “softmax” is for the aspect-category sentiment analysis task and “sigmoid” is for the aspect-term sentiment analysis task. Additionally, we concatenate the aspect embedding on the aspect-guided sentence representation to predict the sentiment polarity. Under that loss function (Eq. DISPLAY_FORM17), the AGDT can produce aspect-specific sentence representations.
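Putting the pieces together, a hedged sketch of the final objective is given below; the helper name and the concrete $\lambda $ value are placeholders (the paper tunes $\lambda $ per dataset).

import torch
import torch.nn.functional as F

def agdt_loss(sentiment_logits, sentiment_target, reconstruction_loss, lam=0.4):
    """Sentiment cross-entropy plus the lambda-weighted aspect-reconstruction term."""
    sentiment_loss = F.cross_entropy(sentiment_logits, sentiment_target)
    return sentiment_loss + lam * reconstruction_loss

# Example: 4 sentiment labels, gold label index 1, reconstruction loss already computed.
loss = agdt_loss(torch.randn(1, 4), torch.tensor([1]), torch.tensor(0.7))
print(loss.item())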
<<</Training Objective>>>
<<</Model Description>>>
<<<Experiments>>>
<<<Datasets and Metrics>>>
<<<Data Preparation.>>>
We conduct experiments on two datasets of the aspect-category based task and two datasets of the aspect-term based task. For these four datasets, we name the full dataset as “DS". In each “DS", there are some sentences like the example in Table TABREF2, containing different sentiment labels, each of which is associated with an aspect (term). For instance, Table TABREF2 shows the customer's different attitudes towards two aspects: “food” (“The appetizers") and “service”. In order to measure whether a model can detect different sentiment polarities in one sentence towards different aspects, we extract a hard dataset from each “DS”, named “HDS”, in which each sentence only has different sentiment labels associated with different aspects. When processing the original sentence $s$ that has multiple aspects ${a}_{1},{a}_{2},...,{a}_{n}$ and corresponding sentiment labels ${l}_{1},{l}_{2},...,{l}_{n}$ ($n$ is the number of aspects or terms in a sentence), the sentence will be expanded into (s, ${a}_{1}$, ${l}_{1}$), (s, ${a}_{2}$, ${l}_{2}$), ..., (s, ${a}_{n}$, ${l}_{n}$) in each dataset BIBREF21, BIBREF22, BIBREF1, i.e., there will be $n$ duplicated sentences associated with different aspects and labels.
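The expansion step and the “HDS” filter described above can be sketched as follows; the record layout and the example labels are hypothetical.

def expand(sentence, aspects, labels):
    """Duplicate a sentence into one (sentence, aspect, label) instance per aspect."""
    return [(sentence, a, l) for a, l in zip(aspects, labels)]

def is_hard(labels):
    """A sentence goes into the 'HDS' split only if its aspects carry different labels."""
    return len(set(labels)) > 1

sentence = "The appetizers are ok, but the service is slow."
aspects, labels = ["food", "service"], ["neutral", "negative"]  # labels are illustrative
print(expand(sentence, aspects, labels))
print(is_hard(labels))  # True, so the expanded instances also enter "HDS"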
<<</Data Preparation.>>>
<<<Aspect-Category Sentiment Analysis.>>>
For comparison, we follow BIBREF1 and use the restaurant reviews dataset of SemEval 2014 (“restaurant-14”) Task 4 BIBREF0 to evaluate our AGDT model. The dataset contains five predefined aspects and four sentiment labels. A large dataset (“restaurant-large”) involves restaurant reviews of three years, i.e., 2014 $\sim $ 2016 BIBREF0. There are eight predefined aspects and three labels in that dataset. When creating the “restaurant-large” dataset, we follow the same procedure as in BIBREF1. Statistics of datasets are shown in Table TABREF19.
<<</Aspect-Category Sentiment Analysis.>>>
<<<Aspect-Term Sentiment Analysis.>>>
We use the restaurant and laptop review datasets of SemEval 2014 Task 4 BIBREF0 to evaluate our model. Both datasets contain four sentiment labels. Meanwhile, we also conduct a three-class experiment, in order to compare with some work BIBREF13, BIBREF3, BIBREF7 which removed “conflict” labels. Statistics of both datasets are shown in Table TABREF20.
<<</Aspect-Term Sentiment Analysis.>>>
<<<Metrics.>>>
The evaluation metric is accuracy. All instances are shown in Table TABREF19 and Table TABREF20. Each experiment is repeated five times. The mean and the standard deviation are reported.
<<</Metrics.>>>
<<</Datasets and Metrics>>>
<<<Implementation Details>>>
We use the pre-trained 300d GloVe embeddings BIBREF23 to initialize word embeddings, which are fixed in all models. For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution $U(-0.25, 0.25)$. Following BIBREF8, BIBREF24, BIBREF25, we take the averaged word embedding as the aspect representation for multi-word aspect terms. The transition depth of the deep transition model is 4 (see Section SECREF30). The hidden size is set to 300. We set the dropout rate BIBREF26 to 0.5 for input token embeddings and 0.3 for hidden states. All models are optimized using the Adam optimizer BIBREF27 with gradient clipping equal to 5 BIBREF28. The initial learning rate is set to 0.01 and the batch size is set to 4096 at the token level. The weight of the reconstruction loss $\lambda $ in Eq. DISPLAY_FORM17 is fine-tuned (see Section SECREF30) and set to 0.4, 0.4, 0.2 and 0.5 for the four datasets, respectively.
<<</Implementation Details>>>
<<<Baselines>>>
To comprehensively evaluate our AGDT, we compare the AGDT with several competitive models.
ATAE-LSTM. It is an attention-based LSTM model. It appends the given aspect embedding to each word embedding, and the concatenated embedding is then taken as the input of the LSTM. The aspect embedding is appended to the LSTM output again. Furthermore, attention is applied to extract features for the final predictions.
CNN. This model focuses on extracting n-gram features to generate sentence representation for the sentiment classification.
TD-LSTM. This model uses two LSTMs to capture the left and right context of the term to generate target-dependent representations for the sentiment prediction.
IAN. This model employs two LSTMs and interactive attention mechanism to learn representations of the sentence and the aspect, and concatenates them for the sentiment prediction.
RAM. This model applies multiple attentions and memory networks to produce the sentence representation.
GCAE. It uses CNNs to extract features and then employs two Gated Tanh-Relu units to selectively output the sentiment information flow towards the aspect for predicting sentiment labels.
<<</Baselines>>>
<<<Main Results and Analysis>>>
<<<Aspect-Category Sentiment Analysis Task>>>
We present the overall performance of our model and the baseline models in Table TABREF27. Results show that our AGDT outperforms all baseline models on both the “restaurant-14” and “restaurant-large” datasets. ATAE-LSTM employs an aspect-weakly associative encoder to generate the aspect-specific sentence representation by simply concatenating the aspect, which is insufficient to exploit the given aspect. Although GCAE incorporates the gating mechanism to control the sentiment information flow according to the given aspect, the information flow is generated by an aspect-independent encoder. Compared with GCAE, our AGDT improves the performance by 2.4% and 1.6% in the “DS” part of the two datasets, respectively. These results demonstrate that our AGDT can sufficiently exploit the given aspect to generate the aspect-guided sentence representation, and thus conduct accurate sentiment prediction. Our model benefits from the following aspects. First, our AGDT utilizes an aspect-guided encoder, which leverages the given aspect to guide the sentence encoding from scratch and generates the aspect-guided representation. Second, the AGDT guarantees that the aspect-specific information has been fully embedded in the sentence representation via reconstructing the given aspect. Third, the given aspect embedding is concatenated on the aspect-guided sentence representation for final predictions.
The “HDS”, which is designed to measure whether a model can detect different sentiment polarities in a sentence, consists of replicated sentences with different sentiments towards multiple aspects. Our AGDT surpasses GCAE by a very large margin (+11.4% and +4.9% respectively) on both datasets. This indicates that the given aspect information is very pivotal to the accurate sentiment prediction, especially when the sentence has different sentiment labels, which is consistent with existing work BIBREF2, BIBREF3, BIBREF4. Those results demonstrate the effectiveness of our model and suggest that our AGDT has better ability to distinguish the different sentiments of multiple aspects compared to GCAE.
<<</Aspect-Category Sentiment Analysis Task>>>
<<<Aspect-Term Sentiment Analysis Task>>>
As shown in Table TABREF28, our AGDT consistently outperforms all compared methods on both domains. In this task, TD-LSTM and ATAE-LSTM use an aspect-weakly associative encoder. IAN, RAM and GCAE employ an aspect-independent encoder. In the “DS” part, our AGDT model surpasses all baseline models, which shows that the inclusion of A-GRU (aspect-guided encoder), aspect-reconstruction and aspect concatenated embedding has an overall positive impact on the classification process.
In the “HDS” part, the AGDT model obtains +3.6% higher accuracy than GCAE on the restaurant domain and +4.2% higher accuracy on the laptop domain, which shows that our AGDT has stronger ability for the multi-sentiment problem against GCAE. These results further demonstrate that our model works well across tasks and datasets.
<<</Aspect-Term Sentiment Analysis Task>>>
<<<Ablation Study>>>
We conduct ablation experiments to investigate the impact of each part of AGDT, where the GRU is stacked with 4 layers. Here “AC” represents the aspect concatenated embedding, “AG” stands for A-GRU (Eq. (DISPLAY_FORM8) $\sim $ ()) and “AR” denotes the aspect-reconstruction (Eq. (DISPLAY_FORM14) $\sim $ (DISPLAY_FORM17)).
From Table TABREF31 and Table TABREF32, we can conclude:
Deep Transition (DT) achieves superior performance to GRU, which is consistent with previous work BIBREF18, BIBREF19 (2 vs. 1).
Utilizing “AG” to guide the encoding of aspect-related features from scratch has a significant impact, yielding highly competitive results, particularly in the “HDS” part, which demonstrates that it has a stronger ability to identify different sentiment polarities towards different aspects (3 vs. 2).
Aspect concatenated embedding can promote the accuracy to a degree (4 vs. 3).
The aspect-reconstruction approach (“AR”) substantially improves the performance, especially in the “HDS" part (5 vs. 4).
The results in setting 6 show that all modules have an overall positive impact on the sentiment classification.
<<</Ablation Study>>>
<<<Impact of Model Depth>>>
We have demonstrated the effectiveness of the AGDT. Here, we investigate the impact of the model depth of AGDT, varying the depth from 1 to 6. Table TABREF39 shows the change in accuracy on the test sets as the depth increases. We find that the best results are obtained when the depth is equal to 4 in most cases, and that further depth does not provide considerable performance improvement.
<<</Impact of Model Depth>>>
<<<Effectiveness of Aspect-reconstruction Approach>>>
Here, we investigate how well the AGDT can reconstruct the aspect information. For the aspect-term reconstruction, we count a reconstruction as correct only when all words of the term are reconstructed. Table TABREF40 shows all results on the four test datasets, which again demonstrates the effectiveness of the aspect-reconstruction approach.
<<</Effectiveness of Aspect-reconstruction Approach>>>
<<<Impact of Loss Weight @!START@$\lambda $@!END@>>>
We randomly sample a temporary development set from the “HDS" part of the training set to choose $\lambda $ for each dataset. We then investigate the impact of $\lambda $ on the aspect-oriented objectives. Specifically, $\lambda $ is increased from 0.1 to 1.0. Figure FIGREF33 illustrates all results on the four “HDS" datasets, which show that reconstructing the given aspect can enhance aspect-specific sentiment features and thus yield better performance.
<<</Impact of Loss Weight @!START@$\lambda $@!END@>>>
<<<Comparison on Three-Class for the Aspect-Term Sentiment Analysis Task>>>
We also conduct a three-class experiment to compare our AGDT with previous models, i.e., IARM, TNet, VAE, PBAN, AOA and MGAN, in Table TABREF41. These previous models are based on an aspect-independent (weakly associative) encoder to generate sentence representations. Results on all domains suggest that our AGDT substantially outperforms most competitive models, except for TNet on the laptop dataset. The reason may be that TNet incorporates additional features (e.g., position features, local ngrams and word-level features) compared to ours (only word-level features).
<<</Comparison on Three-Class for the Aspect-Term Sentiment Analysis Task>>>
<<</Main Results and Analysis>>>
<<</Experiments>>>
<<<Analysis and Discussion>>>
<<<Case Study and Visualization.>>>
To give an intuitive understanding of how the proposed A-GRU works from scratch with different aspects, we take a review sentence as an example. As shown in Table TABREF2, the example “the appetizers are ok, but the service is slow.” has different sentiment labels towards different aspects. The color depth denotes the level of semantic relatedness between the given aspect and each word. A deeper color indicates a stronger relation to the given aspect.
Figure FIGREF43 shows that the A-GRU can effectively guide the encoding of aspect-related features with the given aspect and identify the corresponding sentiment. In another case, “overpriced Japanese food with mediocre service.”, there are two extremely strong sentiment words. As the top of Figure FIGREF44 shows, our A-GRU assigns almost the same weight to the words “overpriced” and “mediocre”. The bottom of Figure FIGREF44 shows that reconstructing the given aspect can effectively enhance aspect-specific sentiment features and produce correct sentiment predictions.
<<</Case Study and Visualization.>>>
<<<Error Analysis.>>>
We further investigate the errors made by AGDT, which can be roughly divided into 3 types. 1) The decision boundary between sentiment polarities is unclear; even the annotators cannot be sure of the sentiment orientation towards the given aspect in the sentence. 2) The “conflict/neutral” instances are very easily misclassified as “positive” or “negative”, due to the imbalanced label distribution in the training corpus. 3) The polarity of complex instances is hard to predict, such as sentences that express subtle emotions, which are hard to capture effectively, or that contain negation words (e.g., never, less and not), which easily affect the sentiment polarity.
<<</Error Analysis.>>>
<<</Analysis and Discussion>>>
<<<Related Work>>>
<<<Sentiment Analysis.>>>
There are various kinds of sentiment analysis tasks, such as document-level BIBREF34, sentence-level BIBREF35, BIBREF36, aspect-level BIBREF0, BIBREF37 and multimodal BIBREF38, BIBREF39 sentiment analysis. For aspect-level sentiment analysis, previous work typically applies the attention mechanism BIBREF11 combined with memory networks BIBREF40 or gating units to solve this task BIBREF8, BIBREF41, BIBREF42, BIBREF1, BIBREF43, BIBREF44, BIBREF45, BIBREF46, where an aspect-independent encoder is used to generate the sentence representation. In addition, some works leverage an aspect-weakly associative encoder to generate aspect-specific sentence representations BIBREF12, BIBREF13, BIBREF14. All of these methods make insufficient use of the given aspect information. There are also some works which jointly extract the aspect term (and opinion term) and predict its sentiment polarity BIBREF47, BIBREF48, BIBREF49, BIBREF50, BIBREF51, BIBREF52, BIBREF53, BIBREF54, BIBREF55. In this paper, we focus on the latter problem and leave aspect extraction BIBREF56 to future work. Some works BIBREF57, BIBREF58, BIBREF59, BIBREF30, BIBREF60, BIBREF51 employ the well-known BERT BIBREF20 or document-level corpora to enhance ABSA tasks, which will be considered in our future work to further improve the performance.
<<</Sentiment Analysis.>>>
<<<Deep Transition.>>>
Deep transition has proven its superiority in language modeling BIBREF17 and machine translation BIBREF18, BIBREF19. We follow the deep transition architecture in BIBREF19 and extend it by incorporating a novel A-GRU for ABSA tasks.
<<</Deep Transition.>>>
<<</Related Work>>>
<<<Conclusions>>>
In this paper, we propose a novel aspect-guided encoder (AGDT) for ABSA tasks, based on a deep transition architecture. Our AGDT can guide the sentence encoding from scratch for the aspect-specific feature selection and extraction. Furthermore, we design an aspect-reconstruction approach to enforce AGDT to reconstruct the given aspect with the generated sentence representation. Empirical studies on four datasets suggest that the AGDT outperforms existing state-of-the-art models substantially on both aspect-category sentiment analysis task and aspect-term sentiment analysis task of ABSA without additional features.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nModel Description\nAspect-Guided Encoder\nAspect-Reconstruction\nTraining Objective\nExperiments\nDatasets and Metrics\nData Preparation.\nAspect-Category Sentiment Analysis.\nAspect-Term Sentiment Analysis.\nMetrics.\nImplementation Details\nBaselines\nMain Results and Analysis\nAspect-Category Sentiment Analysis Task\nAspect-Term Sentiment Analysis Task\nAblation Study\nImpact of Model Depth\nEffectiveness of Aspect-reconstruction Approach\nImpact of Loss Weight @!START@$\\lambda $@!END@\nComparison on Three-Class for the Aspect-Term Sentiment Analysis Task\nAnalysis and Discussion\nCase Study and Visualization.\nError Analysis.\nRelated Work\nSentiment Analysis.\nDeep Transition.\nConclusions"
],
"type": "outline"
}
|
2004.03034
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
The Role of Pragmatic and Discourse Context in Determining Argument Impact
<<<Abstract>>>
Research in the social sciences and psychology has shown that the persuasiveness of an argument depends not only the language employed, but also on attributes of the source/communicator, the audience, and the appropriateness and strength of the argument's claims given the pragmatic and discourse context of the argument. Among these characteristics of persuasive arguments, prior work in NLP does not explicitly investigate the effect of the pragmatic and discourse context when determining argument quality. This paper presents a new dataset to initiate the study of this aspect of argumentation: it consists of a diverse collection of arguments covering 741 controversial topics and comprising over 47,000 claims. We further propose predictive models that incorporate the pragmatic and discourse context of argumentative claims and show that they outperform models that rely only on claim-specific linguistic features for predicting the perceived impact of individual claims within a particular line of argument.
<<</Abstract>>>
<<<Introduction>>>
Previous work in the social sciences and psychology has shown that the impact and persuasive power of an argument depends not only on the language employed, but also on the credibility and character of the communicator (i.e. ethos) BIBREF0, BIBREF1, BIBREF2; the traits and prior beliefs of the audience BIBREF3, BIBREF4, BIBREF5, BIBREF6; and the pragmatic context in which the argument is presented (i.e. kairos) BIBREF7, BIBREF8.
Research in Natural Language Processing (NLP) has only partially corroborated these findings. One very influential line of work, for example, develops computational methods to automatically determine the linguistic characteristics of persuasive arguments BIBREF9, BIBREF10, BIBREF11, but it does so without controlling for the audience, the communicator or the pragmatic context.
Very recent work, on the other hand, shows that attributes of both the audience and the communicator constitute important cues for determining argument strength BIBREF12, BIBREF13. They further show that audience and communicator attributes can influence the relative importance of linguistic features for predicting the persuasiveness of an argument. These results confirm previous findings in the social sciences that show a person's perception of an argument can be influenced by his background and personality traits.
To the best of our knowledge, however, no NLP studies explicitly investigate the role of kairos — a component of pragmatic context that refers to the context-dependent “timeliness" and “appropriateness" of an argument and its claims within an argumentative discourse — in argument quality prediction. Among the many social science studies of attitude change, the order in which argumentative claims are shared with the audience has been studied extensively: 10.1086/209393, for example, summarize studies showing that the argument-related claims a person is exposed to beforehand can affect his perception of an alternative argument in complex ways. article-3 similarly find that changes in an argument's context can have a big impact on the audience's perception of the argument.
Some recent studies in NLP have investigated the effect of interactions on the overall persuasive power of posts in social media BIBREF10, BIBREF14. However, in social media not all posts have to express arguments or stay on topic BIBREF15, and qualitative evaluation of the posts can be influenced by many other factors such as interactions between the individuals BIBREF16. Therefore, it is difficult to measure the effect of argumentative pragmatic context alone in argument quality prediction without the effect of these confounding factors using the datasets and models currently available in this line of research.
In this paper, we study the role of kairos on argument quality prediction by examining the individual claims of an argument for their timeliness and appropriateness in the context of a particular line of argument. We define kairos as the sequence of argumentative text (e.g. claims) along a particular line of argumentative reasoning.
To start, we present a dataset extracted from kialo.com of over 47,000 claims that are part of a diverse collection of arguments on 741 controversial topics. The structure of the website dictates that each argument must present a supporting or opposing claim for its parent claim, and stay within the topic of the main thesis. Rather than being posts on a social media platform, these are community-curated claims. Furthermore, for each presented claim, the audience votes on its impact within the given line of reasoning. Critically then, the dataset includes the argument context for each claim, allowing us to investigate the characteristics associated with impactful arguments.
With the dataset in hand, we propose the task of studying the characteristics of impactful claims by (1) taking the argument context into account, (2) studying the extent to which this context is important, and (3) determining the representation of context that is more effective. To the best of our knowledge, ours is the first dataset that includes claims with both impact votes and the corresponding context of the argument.
<<</Introduction>>>
<<<Related Work>>>
Recent studies in computational argumentation have mainly focused on the tasks of identifying the structure of arguments, such as argument structure parsing BIBREF17, BIBREF18, and argument component classification BIBREF19, BIBREF20. More recently, there has been increased research interest in developing computational methods that can automatically evaluate qualitative characteristics of arguments, such as their impact and persuasive power BIBREF9, BIBREF10, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. Consistent with findings in the social sciences and psychology, some of the work in NLP has shown that the impact and persuasive power of arguments depend not only on the linguistic characteristics of the language, but also on characteristics of the source (ethos) BIBREF16 and the audience BIBREF12, BIBREF13. These studies suggest that the perception of arguments can be influenced by the credibility of the source and the background of the audience.
It has also been shown, in social science studies, that kairos, which refers to the “timeliness” and “appropriateness” of arguments and claims, is important to consider in studies of argument impact and persuasiveness BIBREF7, BIBREF8. One recent study in NLP has investigated the role of argument sequencing in argument persuasion BIBREF14, looking at Change My View, which is a social media platform where users post their views and challenge other users to present arguments in an attempt to change them. However, as stated in BIBREF15, many posts on social media platforms either do not express an argument, or diverge from the main topic of conversation. Therefore, it is difficult to measure the effect of pragmatic context in argument impact and persuasion, without confounding factors from using noisy social media data. In contrast, we provide a dataset of claims along with their structured argument path, which only consists of claims and corresponds to a particular line of reasoning for the given controversial topic. This structure enables us to study the characteristics of impactful claims, accounting for the effect of the pragmatic context.
Consistent with previous findings in the social sciences, we find that incorporating pragmatic and discourse context is important in computational studies of persuasion, as predictive models with the context representation outperform models that only incorporate claim-specific linguistic features in predicting the impact of a claim. Such a system that can predict the impact of a claim given an argumentative discourse could, for example, potentially be employed by argument retrieval and generation models which aim to pick or generate the most appropriate possible claim given the discourse.
<<</Related Work>>>
<<<Dataset>>>
Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented.
Figure FIGREF1 shows a partial argument tree for the argument thesis “Physical torture of prisoners is an acceptable interrogation tool.”. Each node in the argument tree corresponds to a claim, and these argument trees are constructed and edited collaboratively by the users of the platform.
Except the thesis, every claim in the argument tree either opposes or supports its parent claim. Each path from the root to leaf nodes corresponds to an argument path which represents a particular line of reasoning on the given controversial topic.
Moreover, each claim has impact votes assigned by the users of the platform. The impact votes evaluate how impactful a claim is within its context, which consists of its predecessor claims from the thesis of the tree. For example, claim O1 “It is morally wrong to harm a defenseless person” is an opposing claim for the thesis and it is an impactful claim since most of its impact votes belong to the category of very high impact. However, claim S3 “It is illegitimate for state actors to harm someone without the process” is a supporting claim for its parent O1 and it is a less impactful claim since most of the impact votes belong to the no impact and low impact categories.
Distribution of impact votes. The distribution of claims with the given range of number of impact votes is shown in Table TABREF5. There are 19,512 claims in total with 3 or more votes. Out of the claims with 3 or more votes, the majority have 5 or more votes. We limit our study to the claims with at least 5 votes to have a more reliable assignment of the accumulated impact label for each claim.
Impact label statistics. Table TABREF7 shows the distribution of the number of votes for each of the impact categories. The claims have $241,884$ total votes. The majority of the impact votes belong to medium impact category. We observe that users assign more high impact and very high impact votes than low impact and no impact votes respectively. When we restrict the claims to the ones with at least 5 impact votes, we have $213,277$ votes in total.
Agreement for the impact votes. To determine the agreement in assigning the impact label for a particular claim, we compute, for each claim, the percentage of the votes that are the same as the majority impact vote for that claim. Let $c_{i}$ denote the number of votes the claim received for the impact label $l$ at index $i$ of the class labels C=[no impact, low impact, medium impact, high impact, very high impact]; the agreement score for the claim is then $100 \cdot \frac{\max _{i} c_{i}}{\sum _{i} c_{i}}\%$.
For example, for claim S1 in Figure FIGREF1, the agreement score is $100 * \frac{30}{90}\%=33.33\%$ since the majority class (no impact) has 30 votes and there are 90 impact votes in total for this particular claim. We compute the agreement score for the cases where (1) we treat each impact label separately (5-class case) and (2) we combine the classes high impact and very high impact into one class, impactful, and the classes no impact and low impact into one class, not impactful (3-class case).
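As a minimal illustration, the agreement scores can be computed from raw vote counts as follows; the split of S1's votes beyond its 30 no impact votes is our own assumption for the example, not a value from the dataset:

```python
from collections import Counter

# Map each 5-class label to the coarser 3-class label used in our experiments.
TO_3_CLASS = {
    "no impact": "not impactful",
    "low impact": "not impactful",
    "medium impact": "medium impact",
    "high impact": "impactful",
    "very high impact": "impactful",
}

def agreement_score(votes):
    """Percentage of votes that agree with the majority label of a claim."""
    counts = Counter(votes)
    return 100.0 * max(counts.values()) / len(votes)

# Claim S1 from Figure 1: 30 'no impact' votes out of 90 in total
# (the split of the remaining 60 votes is made up for this example).
votes_s1 = (["no impact"] * 30 + ["low impact"] * 25 + ["medium impact"] * 15
            + ["high impact"] * 12 + ["very high impact"] * 8)
print(agreement_score(votes_s1))                           # 5-class case: 33.3
print(agreement_score([TO_3_CLASS[v] for v in votes_s1]))  # 3-class case: 61.1
```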
Table TABREF6 shows the number of claims above the given agreement score thresholds when we include the claims with at least 5 votes. We see that when we combine the impact classes into the 3-class case, there are more claims with a high agreement score. This may imply that distinguishing between the no impact and low impact classes, and between the high impact and very high impact classes, is difficult. To decrease the sparsity issue, in our experiments we use the 3-class representation for the impact labels. Moreover, to have a more reliable assignment of impact labels, we consider only the claims that have more than 60% agreement.
Context. In an argument tree, the claims from the thesis node (root) to each leaf node form an argument path. This argument path represents a particular line of reasoning for the given thesis. Similarly, for each claim, all the claims along the path from the thesis to the claim represent the context for the claim. For example, in Figure FIGREF1, the context for O1 consists of only the thesis, whereas the context for S3 consists of both the thesis and O1, since S3 is provided to support the claim O1, which is an opposing claim for the thesis.
The claims are not constructed independently of their context, since they are written with the line of reasoning so far in mind. In most cases, each claim elaborates on the point made by its parent and presents cases to support or oppose the parent claim's points. Similarly, when users evaluate the impact of a claim, they consider whether the claim is timely and appropriate given its context. There are cases in the dataset where the same claim has different impact labels when presented within a different context. Therefore, we claim that it is not sufficient to study only the linguistic characteristics of a claim to determine its impact; it is also necessary to consider its context.
Context length ($\text{C}_{l}$) for a particular claim C is defined as the number of claims included in the argument path from the thesis up to, but not including, the claim C. For example, in Figure FIGREF1, the context lengths for O1 and S3 are 1 and 2 respectively. Table TABREF8 shows the number of claims within the given ranges of context length, for the claims with more than 5 votes and $60\%$ agreement score. We observe that more than half of these claims have a context length of 3 or higher.
<<</Dataset>>>
<<<Methodology>>>
<<<Hypothesis and Task Description>>>
Similar to prior work, our aim is to understand the characteristics of impactful claims in argumentation. However, we hypothesize that the qualitative characteristics of arguments are not independent of the context in which they are presented. To understand the relationship between argument context and the impact of a claim, we aim to incorporate the context along with the claim itself in our predictive models.
Prediction task. Given a claim, we want to predict the impact label that is assigned to it by the users: not impactful, medium impact, or impactful.
Preprocessing. We restrict our study to claims with at least 5 votes and more than $60\%$ agreement, to have a reliable impact label assignment. We have $7,386$ claims in the dataset satisfying these constraints. We see that the impact class impactful is the majority class, since around $58\%$ of the claims belong to this category.
For our experiments, we split our data to train (70%), validation (15%) and test (15%) sets.
<<</Hypothesis and Task Description>>>
<<<Baseline Models>>>
<<<Majority>>>
The majority baseline assigns the most common label of the training examples (impactful) to every test example.
<<</Majority>>>
<<<SVM with RBF kernel>>>
Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim.
The features that represent the simple characteristics of the claim's argument tree include the distance and similarity of the claim to the thesis, the similarity of the claim to its parent, and the impact votes of the claim's parent claim. We encode the similarity of a claim to its parent and to the thesis claim with the cosine similarity of their tf-idf vectors. The distance and similarity metrics aim to model whether claims which are more similar (i.e. potentially more topically relevant) to their parent claim or the thesis claim are more impactful.
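A minimal scikit-learn sketch of these two similarity features, using the claims from Figure FIGREF1; in practice the vectorizer would be fitted on the full set of claims rather than on three sentences:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

thesis = "Physical torture of prisoners is an acceptable interrogation tool."
parent = "It is morally wrong to harm a defenseless person."  # claim O1
claim = "It is illegitimate for state actors to harm someone without the process."  # claim S3

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform([thesis, parent, claim])

# Two of the tree-based features fed to the SVM with RBF kernel.
sim_to_parent = cosine_similarity(tfidf[2], tfidf[1])[0, 0]
sim_to_thesis = cosine_similarity(tfidf[2], tfidf[0])[0, 0]
print(sim_to_parent, sim_to_thesis)
```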
We encode the quality of the parent claim as the number of votes for each impact class, and incorporate it as a feature to understand whether a claim is more likely to be impactful given an impactful parent claim.
Linguistic features. To represent each claim, we extracted the linguistic features proposed by BIBREF9 such as tf-idf scores for unigrams and bigrams, ratio of quotation marks, exclamation marks, modal verbs, stop words, type-token ratio, hedging BIBREF29, named entity types, POS n-grams, sentiment BIBREF30 and subjectivity scores BIBREF31, spell-checking, readability features such as Coleman-Liau BIBREF32, Flesch BIBREF33, argument lexicon features BIBREF34 and surface features such as word lengths, sentence lengths, word types, and number of complex words.
<<</SVM with RBF kernel>>>
<<<FastText>>>
joulin-etal-2017-bag introduced a simple, yet effective baseline for text classification, which they show to be competitive with deep learning classifiers in terms of accuracy. Their method represents a sequence of text as a bag of n-grams, and each n-gram is passed through a look-up table to get its dense vector representation. The overall sequence representation is simply an average over the dense representations of the bag of n-grams, and is fed into a linear classifier to predict the label. We use the code released by joulin-etal-2017-bag to train a classifier for argument impact prediction, based on the claim text.
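A minimal sketch of this baseline with the fastText Python bindings; the file name and hyperparameters are illustrative, and the actual experiments use the authors' released code:

```python
import fasttext

# train.txt holds one claim per line, prefixed with its impact label,
# e.g. "__label__impactful It is morally wrong to harm a defenseless person."
model = fasttext.train_supervised(
    input="train.txt",
    wordNgrams=2,   # bag of uni- and bigrams
    epoch=25,
    lr=0.5,
)

labels, probs = model.predict("It is morally wrong to harm a defenseless person.")
print(labels, probs)
```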
<<</FastText>>>
<<<BiLSTM with Attention>>>
Another effective baseline BIBREF35, BIBREF36 for text classification consists of encoding the text sequence using a bidirectional Long Short Term Memory (LSTM) BIBREF37, to get the token representations in context, and then attending BIBREF38 over the tokens to get the sequence representation. For the query vector for attention, we use a learned context vector, similar to yang-etal-2016-hierarchical. We picked our hyperparameters based on performance on the validation set, and report our results for the best set of hyperparameters. We initialized our word embeddings with glove vectors BIBREF39 pre-trained on Wikipedia + Gigaword, and used the Adam optimizer BIBREF40 with its default settings.
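A minimal PyTorch sketch of this baseline with a learned context vector as the attention query; the dimensions and the random input are illustrative, and in the actual experiments the embeddings are initialized with GloVe vectors:

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Learned context vector used as the query for attention.
        self.context_vector = nn.Parameter(torch.randn(2 * hidden_dim))
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embedding(token_ids))        # (batch, seq, 2*hidden)
        scores = torch.matmul(h, self.context_vector)      # (batch, seq)
        weights = torch.softmax(scores, dim=1).unsqueeze(-1)
        pooled = (weights * h).sum(dim=1)                  # attention-weighted sum
        return self.classifier(pooled)

model = BiLSTMAttention(vocab_size=20000)
logits = model(torch.randint(1, 20000, (8, 50)))           # batch of 8 claims, 50 tokens each
```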
<<</BiLSTM with Attention>>>
<<</Baseline Models>>>
<<<Fine-tuned BERT model>>>
devlin2018bert fine-tuned a pre-trained deep bi-directional transformer language model (which they call BERT) by adding a simple classification layer on top, and achieved state-of-the-art results across a variety of NLP tasks. We employ their pre-trained language models for our task and compare them to our baseline models. For all the architectures described below, we fine-tune for 10 epochs with a learning rate of 2e-5. We employ an early stopping procedure based on the model performance on a validation set.
<<<Claim with no context>>>
In this setting, we attempt to classify the impact of the claim, based on the text of the claim only. We follow the fine-tuning procedure for sequence classification detailed in BIBREF41, and input the claim text as a sequence of tokens preceded by the special [CLS] token and followed by the special [SEP] token. We add a classification layer on top of the BERT encoder, to which we pass the representation of the [CLS] token, and fine-tune this for argument impact prediction.
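A minimal sketch of this setup with the HuggingFace transformers library; the specific pre-trained checkpoint and the optimizer step are illustrative:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

claim = "It is morally wrong to harm a defenseless person."
# The tokenizer adds [CLS] and [SEP]; the [CLS] representation feeds the
# classification layer added on top of the encoder.
inputs = tokenizer(claim, return_tensors="pt", truncation=True, max_length=128)
labels = torch.tensor([2])   # e.g. the "impactful" class

loss = model(**inputs, labels=labels).loss
loss.backward()              # an optimizer step (e.g. AdamW, lr=2e-5) would follow
```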
<<</Claim with no context>>>
<<<Claim with parent representation>>>
In this setting, we use the parent claim's text, in addition to the target claim text, in order to classify the impact of the target claim. We treat this as a sequence pair classification task, and combine both the target claim and parent claim as a single sequence of tokens, separated by the special separator [SEP]. We then follow the same procedure above, for fine-tuning.
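A minimal sketch of the sequence-pair encoding with the transformers library; the ordering of claim and parent shown here is one possible choice, not necessarily the one used in the experiments:

```python
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

claim = "It is morally wrong to harm a defenseless person."
parent = "Physical torture of prisoners is an acceptable interrogation tool."

# Sequence-pair encoding: [CLS] claim [SEP] parent [SEP], with segment ids 0 and 1.
inputs = tokenizer(claim, parent, return_tensors="pt", truncation=True, max_length=256)
print(model(**inputs).logits.shape)   # torch.Size([1, 3])
```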
<<</Claim with parent representation>>>
<<<Incorporating larger context>>>
In this setting, we consider incorporating a larger context from the discourse, in order to assess the impact of a claim. In particular, we consider up to four previous claims in the discourse (for a total context length of 5). We attempt to incorporate larger context into the BERT model in three different ways.
Flat representation of the path. The first, simple approach is to represent the entire path (claim + context) as a single sequence, where each of the claims is separated by the [SEP] token. BERT was trained on sequence pairs, and therefore the pre-trained encoders only have two segment embeddings BIBREF41. So to fit multiple sequences into this framework, we indicate all tokens of the target claim as belonging to segment A and the tokens for all the claims in the discourse context as belonging to segment B. This way of representing the input requires no additional changes to the architecture or retraining, and we can just fine-tune in a similar manner as above. We refer to this representation of the context as a flat representation, and denote the model as $\text{Context}_{f}(i)$, where $i$ indicates the length of the context that is incorporated into the model.
Attention over context. Recent work in incorporating argument sequence in predicting persuasiveness BIBREF14 has shown that hierarchical representations are effective in representing context. Similarly, we consider hierarchical representations for representing the discourse. We first encode each claim using the pre-trained BERT model as the claim encoder, and use the representation of the [CLS] token as the claim representation. We then employ dot-product attention BIBREF38 to get a weighted representation for the context. We use a learned context vector as the query for computing attention scores, similar to yang-etal-2016-hierarchical. The attention score $\alpha _c$ is computed as shown below:

$\alpha _c = \frac{\exp (V_{l}^{\top } V_{c})}{\sum _{c^{\prime } \in D} \exp (V_{l}^{\top } V_{c^{\prime }})}$
where $V_c$ is the claim representation that was computed with the BERT encoder as described above, $V_l$ is the learned context vector that is used for computing the attention scores, and $D$ is the set of claims in the discourse. After computing the attention scores, the final context representation $V_d$ is computed as follows:

$V_d = \sum _{c \in D} \alpha _c V_c$
We then concatenate the context representation with the target claim representation, $[V_d, V_r]$, and pass it to the classification layer to predict the impact. We denote this model as $\text{Context}_{a}(i)$.
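A minimal PyTorch sketch of this attention step over pre-computed [CLS] representations; the tensors here are random placeholders standing in for the BERT outputs:

```python
import torch
import torch.nn as nn

hidden = 768                                   # size of the BERT [CLS] representations
context_reps = torch.randn(4, hidden)          # V_c for each claim c in the discourse D
claim_rep = torch.randn(hidden)                # V_r, the target claim representation
query = nn.Parameter(torch.randn(hidden))      # V_l, the learned context vector

# Dot-product attention scores over the discourse claims, then the weighted sum V_d.
alpha = torch.softmax(context_reps @ query, dim=0)          # one weight per claim
context_rep = (alpha.unsqueeze(-1) * context_reps).sum(0)   # V_d

# Concatenate [V_d, V_r] and classify.
classifier = nn.Linear(2 * hidden, 3)
logits = classifier(torch.cat([context_rep, claim_rep]))
```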
GRU to encode context. Similar to the approach above, we consider a hierarchical representation of the context. We compute the claim representations as detailed above, and then feed the discourse claims' representations (in sequence) into a bidirectional Gated Recurrent Unit (GRU) BIBREF42 to compute the context representation. We concatenate this with the target claim representation and use it to predict the claim impact. We denote this model as $\text{Context}_{gru}(i)$.
<<</Incorporating larger context>>>
<<</Fine-tuned BERT model>>>
<<</Methodology>>>
<<<Results and Analysis>>>
Table TABREF21 shows the macro precision, recall and F1 scores for the baselines as well as the BERT models with and without context representations.
We see that parent quality is a simple yet effective feature: the SVM model with this feature achieves a significantly higher ($p<0.001$) F1 score ($46.61\%$) than the models with distance from the thesis and linguistic features. Claims with higher-impact parents are more likely to have higher impact. Similarity with the parent and thesis is not significantly better than the majority baseline. Although the BiLSTM model with attention and the FastText baseline perform better than the SVM with distance from the thesis and linguistic features, they have similar performance to the parent quality baseline.
We find that the BERT model with the claim-only representation performs significantly better ($p<0.001$) than the baseline models. Incorporating only the parent representation along with the claim representation does not give a significant improvement over representing the claim only. However, incorporating the flat representation of the larger context along with the claim representation consistently achieves significantly better ($p<0.001$) performance than the claim representation alone. Similarly, the attention representation over the context with the learned query vector achieves significantly better performance than the claim representation only ($p<0.05$).
We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\%$).
To understand for what kinds of claims the best-performing contextual model is more effective, we evaluate the BERT model with flat context representation for claims with context length values 1, 2, 3 and 4 separately. Table TABREF26 shows the F1 score of the BERT model without context and with flat context representation with different lengths of context. For the claims with context length 1, adding the $\text{Context}_{f}(3)$ and $\text{Context}_{f}(4)$ representation along with the claim achieves a significantly better $(p<0.05)$ F1 score than modeling the claim only. Similarly for the claims with context length 3 and 4, $\text{Context}_{f}(4)$ and $\text{Context}_{f}(3)$ perform significantly better than BERT with claim only ($p<0.05$ and $p<0.01$ respectively). We see that models with larger context are helpful even for claims which have limited context (e.g. $\text{C}_{l}=1$). This may suggest that when we train the models with larger context, they learn how to represent the claims and their context better.
<<</Results and Analysis>>>
<<<Conclusion>>>
In this paper, we present a dataset of claims with their corresponding impact votes, and investigate the role of argumentative discourse context in argument impact classification. We experiment with various models to represent the claims and their context and find that incorporating the context information gives significant improvement in predicting argument impact. In our study, we find that flat representation of the context gives the best improvement in the performance and our analysis indicates that the contextual models perform better even for the claims with limited context.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nDataset\nMethodology\nHypothesis and Task Description\nBaseline Models\nMajority\nSVM with RBF kernel\nFastText\nBiLSTM with Attention\nFine-tuned BERT model\nClaim with no context\nClaim with parent representation\nIncorporating larger context\nResults and Analysis\nConclusion"
],
"type": "outline"
}
|
1910.12618
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Textual Data for Time Series Forecasting
<<<Abstract>>>
While ubiquitous, textual sources of information such as company reports, social media posts, etc. are hardly included in prediction algorithms for time series, despite the relevant information they may contain. In this work, openly accessible daily weather reports from France and the United-Kingdom are leveraged to predict time series of national electricity consumption, average temperature and wind-speed with a single pipeline. Two methods of numerical representation of text are considered, namely traditional Term Frequency - Inverse Document Frequency (TF-IDF) as well as our own neural word embedding. Using exclusively text, we are able to predict the aforementioned time series with sufficient accuracy to be used to replace missing data. Furthermore the proposed word embeddings display geometric properties relating to the behavior of the time series and context similarity between words.
<<</Abstract>>>
<<<Introduction>>>
Whether it is in the field of energy, finance or meteorology, accurately predicting the behavior of time series is nowadays of paramount importance for optimal decision making or profit. While the field of time series forecasting is extremely prolific from a research point-of-view, up to now it has narrowed its efforts on the exploitation of regular numerical features extracted from sensors, data bases or stock exchanges. Unstructured data such as text on the other hand remains underexploited for prediction tasks, despite its potentially valuable informative content. Empirical studies have already proven that textual sources such as news articles or blog entries can be correlated to stock exchange time series and have explanatory power for their variations BIBREF0, BIBREF1. This observation has motivated multiple extensive experiments to extract relevant features from textual documents in different ways and use them for prediction, notably in the field of finance. In Lavrenko et al. BIBREF2, language models (considering only the presence of a word) are used to estimate the probability of trends such as surges or falls of 127 different stock values using articles from Biz Yahoo!. Their results show that this text driven approach could be used to make profit on the market. One of the most conventional ways for text representation is the TF-IDF (Term Frequency - Inverse Document Frequency) approach. Authors have included such features derived from news pieces in multiple traditional machine learning algorithms such as support vector machines (SVM) BIBREF3 or logistic regression BIBREF4 to predict the variations of financial series again. An alternative way to encode the text is through latent Dirichlet allocation (LDA) BIBREF5. It assigns topic probabilities to a text, which can be used as inputs for subsequent tasks. This is for instance the case in Wang's aforementioned work (alongside TF-IDF). In BIBREF6, the authors used Reuters news encoded by LDA to predict if NASDAQ and Dow Jones closing prices increased or decreased compared to the opening ones. Their empirical results show that this approach was efficient to improve the prediction of stock volatility. More recently Kanungsukkasem et al. BIBREF7 introduced a variant of the LDA graphical model, named FinLDA, to craft probabilities that are specifically tailored for a financial time series prediction task (although their approach could be generalized to other ones). Their results showed that indeed performance was better when using probabilities from their alternative than those of the original LDA. Deep learning with its natural ability to work with text through word embeddings has also been used for time series prediction with text. Combined with traditional time series features, the authors of BIBREF8 derived sentiment features from a convolutional neural network (CNN) to reduce the prediction error of oil prices. Akita et al. BIBREF9 represented news articles through the use of paragraph vectors BIBREF10 in order to predict 10 closing stock values from the Nikkei 225. While in the case of financial time series the existence of specialized press makes it easy to decide which textual source to use, it is much more tedious in other fields. Recently in Rodrigues et al. BIBREF11, short description of events (such as concerts, sports matches, ...) are leveraged through a word embedding and neural networks in addition to more traditional features. 
Their experiments show that including the text can bring an improvement of up to 2% in root mean squared error compared to an approach without textual information. Although the presented studies conclude on the usefulness of text to improve predictions, they never thoroughly analyze which aspects of the text are of importance, keeping the models as black boxes.
The field of electricity consumption is one where expert knowledge is broad. It is known that the major phenomena driving the load demand are calendar (time of the year, day of the week, ...) and meteorological. For instance generalized additive models (GAM) BIBREF12, representing the consumption as a sum of functions of the time of the year, temperature and wind speed (among others), typically yield less than 1.5% of relative error for French national electricity demand and 8% for the local one BIBREF13, BIBREF14. Neural networks and their variants, with their ability to extract patterns from heterogeneous types of data, have also obtained state-of-the-art results BIBREF15, BIBREF16, BIBREF17. However to our knowledge no exploratory work using text has been conducted yet. Including such data in electricity demand forecasting models would not only contribute to close the gap with other domains, but also help to understand better which aspects of text are useful, how the encoding of the text influences forecasts and to which extent a prediction algorithm can extract relevant information from unstructured data. Moreover the major drawback of all the aforementioned approaches is that they require meteorological data that may be difficult to find, unavailable in real time or expensive. Textual sources such as weather reports on the other hand are easy to find, usually available on a daily basis and free.
The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series.
The rest of this paper is organized as follows. The following section introduces the two data sets used to conduct our study. Section 3 presents the different machine learning approaches used and how they were tuned. Section 4 highlights the main results of our study, while section 5 concludes this paper and gives insight on future possible work.
<<</Introduction>>>
<<<Presentation of the data>>>
In order to prove the consistency of our work, experiments have been conducted on two data sets, one for France and the other for the UK. In this section details about the text and time series data are given, as well as the major preprocessing steps.
<<<Time Series>>>
Three types of time series are considered in our work: national net electricity consumption (also referred as load or demand), national temperature and wind speed. The load data sets were retrieved on the websites of the respective grid operators, respectively RTE (Réseau et Transport d'Électricité) for France and National Grid for the UK. For France, the available data ranges from January the 1st 2007 to August the 31st 2018. The default temporal resolution is 30 minutes, but it is averaged to a daily one. For the UK, it is available from January the 1st 2006 to December the 31st 2018 with the same temporal resolution and thus averaging. Due to social factors such as energy policies or new usages of electricity (e.g. Electric Vehicles), the net consumption usually has a long-term trend (fig. FIGREF2). While for France it seems marginal (fig. FIGREF2), there is a strong decreasing trend for the United-Kingdom (fig. FIGREF2). Such a strong non-stationarity of the time series would cause problems for the forecasting process, since the learnt demand levels would differ significantly from the upcoming ones. Therefore a linear regression was used to approximate the decreasing trend of the net consumption in the UK. It is then subtracted before the training of the methods, and then re-added a posteriori for prediction.
As for the weather time series, they were extracted from multiple weather stations around France and the UK. The national average is obtained by combining the data from all stations with a weight proportional to the city population the station is located in. For France the stations' data is provided by the French meteorological office, Météo France, while the British ones are scrapped from stations of the National Oceanic and Atmospheric Administration (NOAA). Available on the same time span as the consumption, they usually have a 3 hours temporal resolution but are averaged to a daily one as well. Finally the time series were scaled to the range $[0,1]$ before the training phase, and re-scaled during prediction time.
<<</Time Series>>>
<<<Text>>>
Our work aims at predicting time series using exclusively text. Therefore, for both countries, the inputs of all our models consist only of written daily weather reports. In their raw form, those reports take the shape of PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature, wind, etc. maps. Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span the same period as the corresponding time series and, given their daily nature, yield a total of 4,261 and 4,748 documents respectively. An excerpt for each language may be found in tables TABREF6 and TABREF7. The relevant text was extracted from the PDF documents using the Python library PyPDF2.
As emphasized in many studies, preprocessing of the text can ease the learning of the methods and improve accuracy BIBREF18. Therefore the following steps are applied: removal of non-alphabetic characters, removal of stop-words and lowercasing. While it was often highlighted that word lemmatization and stemming improve results, initial experiments showed it was not the case for our study. This is probably due to the technical vocabulary used in both corpora pertaining to the field of meteorology. Already limited in size, the aforementioned preprocessing operations do not yield a significant vocabulary size reduction and can even lead to a loss of linguistic meaning. Finally, extremely frequent or rare words may not have high explanatory power and may reduce the different models' accuracy. That is why words appearing less than 7 times or in more than 40% of the (learning) corpus are removed as well. Figure FIGREF8 represents the distribution of the document lengths after preprocessing, while table TABREF11 gives descriptive statistics on both corpora. Note that the preprocessing steps do not heavily rely on the considered language: therefore our pipeline is easily adaptable for other languages.
<<</Text>>>
<<</Presentation of the data>>>
<<<Modeling and forecasting framework>>>
A major target of our work is to show the reports contain an intrinsic information relevant for time series, and that the predictive results do not heavily depend on the encoding of the text or the machine learning algorithm used. Therefore in this section we present the text encoding approaches, as well as the forecasting methods used with them.
<<<Numerical Encoding of the Text>>>
Machines and algorithms cannot work with raw text directly. Thus one major step when working with text is the choice of its numerical representation. In our work two significantly different encoding approaches are considered. The first one is the TF-IDF approach. It embeds a corpus of $N$ documents and $V$ words into a matrix $X$ of size $N \times V$. As such, every document is represented by a vector of size $V$. For each word $w$ and document $d$ the associated coefficient $x_{d,w}$ represents the frequency of that word in that document, penalized by its overall frequency in the rest of the corpus. Thus very common words will have a low TF-IDF value, whereas specific ones which appear often in a handful of documents will have a large TF-IDF score. The exact formula to calculate the TF-IDF value of word $w$ in document $d$ is:

$x_{d,w} = f_{d,w} \times \log \left( \frac{N}{\#\lbrace d: w \in d \rbrace } \right)$
where $f_{d,w}$ is the number of appearances of $w$ in $d$ adjusted by the length of $d$ and $\#\lbrace d: w \in d \rbrace $ is the number of documents in which the word $w$ appears. In our work we considered only individual words, also commonly referred as 1-grams in the field of natural language processing (NLP). The methodology can be easily extended to $n$-grams (groups of $n$ consecutive words), but initial experiments showed that it did not bring any significant improvement over 1-grams.
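A minimal scikit-learn sketch of this representation; the toy corpus is our own, the real pipeline applies it to the preprocessed daily reports, and note that scikit-learn computes a smoothed variant of the formula above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus; in the real pipeline `reports` holds the preprocessed daily weather reports.
reports = [
    "cloudy morning with scattered showers and moderate southwesterly winds",
    "very hot afternoon with thunderstorms expected over the southern regions",
    "freezing night with snow showers and strong northerly winds",
]

# The preprocessing removes words appearing fewer than 7 times or in more than 40%
# of the learning corpus; min_df/max_df can approximate this, relaxed here so the
# toy example keeps a non-empty vocabulary.
vectorizer = TfidfVectorizer(min_df=1, max_df=1.0, ngram_range=(1, 1))
X = vectorizer.fit_transform(reports)   # TF-IDF matrix of size N x V
print(X.shape)
```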
The second representation is a neural word embedding. It consists in representing every word in the corpus by a real-valued vector of dimension $q$. Such models are usually obtained by learning a vector representation from word co-occurrences in a very large corpus (typically hundreds of thousands of documents, such as Wikipedia articles for example). The two most popular embeddings are probably Google's Word2Vec BIBREF19 and Stanford's GloVe BIBREF20. In the former, a neural network is trained to predict a word given its context (continuous bag-of-words model), whereas in the latter a matrix factorization scheme on the log co-occurrences of words is applied. In any case, the very nature of the objective function allows the embedding models to learn to translate linguistic similarities into geometric properties in the vector space. For instance the vector $\overrightarrow{king} - \overrightarrow{man} + \overrightarrow{woman}$ is expected to be very close to the vector $\overrightarrow{queen}$. However in our case we want a vector encoding which is tailored for the technical vocabulary of our weather reports and for the subsequent prediction task. This is why we decided to train our own word embedding from scratch during the learning phase of our recurrent or convolutional neural network. Aside from the much more restricted size of our corpora, the major difference with the aforementioned embeddings is that in our case it is obtained by minimizing a squared loss on the prediction. In that framework there is no explicit reason for our representation to display any geometric structure. However as detailed in section SECREF36, our word vectors nonetheless display geometric properties pertaining to the behavior of the time series.
<<</Numerical Encoding of the Text>>>
<<<Machine Learning Algorithms>>>
Multiple machine learning algorithms were applied on top of the encoded textual documents. For the TF-IDF representation, the following approaches are applied: random forests (RF), LASSO and multilayer perceptron (MLP) neural networks (NN). We chose these algorithms combined to the TF-IDF representation due to the possibility of interpretation they give. Indeed, considering the novelty of this work, the understanding of the impact of the words on the forecast is of paramount importance, and as opposed to embeddings, TF-IDF has a natural interpretation. Furthermore the RF and LASSO methods give the possibility to interpret marginal effects and analyze the importance of features, and thus to find the words which affect the time series the most.
As for the word embedding, recurrent or convolutional neural networks (respectively RNN and CNN) were used with them. MLPs are not used, for they would require to concatenate all the vector representations of a sentence together beforehand and result in a network with too many parameters to be trained correctly with our number of available documents. Recall that we decided to train our own vector representation of words instead of using an already available one. In order to obtain the embedding, the texts are first converted into a sequence of integers: each word is given a number ranging from 1 to $V$, where $V$ is the vocabulary size (0 is used for padding or unknown words in the test set). One must then calculate the maximum sequence length $S$, and sentences of length shorter than $S$ are then padded by zeros. During the training process of the network, for each word a $q$ dimensional real-valued vector representation is calculated simultaneously to the rest of the weights of the network. Ergo a sentence of $S$ words is translated into a sequence of $S$ $q$-sized vectors, which is then fed into a recurrent neural unit. For both languages, $q=20$ seemed to yield the best results. In the case of recurrent units two main possibilities arise, with LSTM (Long Short-Term Memory) BIBREF21 and GRU (Gated Recurrent Unit) BIBREF22. After a few initial trials, no significant performance differences were noticed between the two types of cells. Therefore GRU were systematically used for recurrent networks, since their lower amount of parameters makes them easier to train and reduces overfitting. The output of the recurrent unit is afterwards linked to a fully connected (also referred as dense) layer, leading to the final forecast as output. The rectified linear unit (ReLU) activation in dense layers systematically gave the best results, except on the output layer where we used a sigmoid one considering the time series' normalization. In order to tone down overfitting, dropout layers BIBREF23 with probabilities of 0.25 or 0.33 are set in between the layers. Batch normalization BIBREF24 is also used before the GRU since it stabilized training and improved performance. Figure FIGREF14 represents the architecture of our RNN.
The word embedding matrix is therefore learnt jointly with the rest of the parameters of the neural network by minimization of the quadratic loss with respect to the true electricity demand. Note that while above we described the case of the RNN, the same procedure is considered for the case of the CNN, with only the recurrent layers replaced by a combination of 1D convolution and pooling ones. As for the optimization algorithms of the neural networks, traditional stochastic gradient descent with momentum or ADAM BIBREF25 together with a quadratic loss are used. All of the previously mentioned methods were coded with Python. The LASSO and RF were implemented using the library Scikit Learn BIBREF26, while Keras BIBREF27 was used for the neural networks.
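A minimal Keras sketch of the recurrent architecture described above (figure FIGREF14); the number of GRU units and dense neurons are our own assumptions, while the embedding dimension, batch normalization, dropout, activations and quadratic loss follow the description:

```python
from tensorflow.keras import layers, models

V = 52      # vocabulary size after feature selection
S = 200     # maximum report length in tokens (illustrative)
q = 20      # word embedding dimension

model = models.Sequential([
    layers.Input(shape=(S,)),
    layers.Embedding(input_dim=V + 1, output_dim=q),   # index 0 reserved for padding
    layers.BatchNormalization(),        # stabilizes training before the recurrent unit
    layers.GRU(32),                     # recurrent encoding of the report
    layers.Dropout(0.25),
    layers.Dense(16, activation="relu"),
    layers.Dropout(0.25),
    layers.Dense(1, activation="sigmoid"),  # the target series is scaled to [0, 1]
])
model.compile(optimizer="adam", loss="mse")
```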
<<</Machine Learning Algorithms>>>
<<<Hyperparameter Tuning>>>
While most parameters are trained during the learning optimization process, all methods still involve a certain number of hyperparameters that must be manually set by the user. For instance for random forests it can correspond to the maximum depth of the trees or the fraction of features used at each split step, while for neural networks it can be the number of layers, neurons, the embedding dimension or the activation functions used. This is why the data is split into three sets:
The training set, using all data available up to the 31st of December 2013 (2,557 days for France and 2,922 for the UK). It is used to learn the parameters of the algorithms through mathematical optimization.
The years 2014 and 2015 serve as validation set (730 days). It is used to tune the hyperparameters of the different approaches.
All the data from January the 1st 2016 (974 days for France and 1,096 for the UK) is used as test set, on which the final results are presented.
Grid search is applied to find the best combination of values: for each hyperparameter, a range of values is defined, and all the possible combinations are successively tested. The one yielding the lowest RMSE (see section SECREF4) on the validation set is used for the final results on the test one. While relatively straightforward for RFs and the LASSO, the extreme number of possibilities for NNs and their extensive training time compelled us to limit the range of architectures possible. The hyperparameters are tuned per method and per country: ergo the hyperparameters of a given algorithm will be the same for the different time series of a country (e.g. the RNN architecture for temperature and load for France will be the same, but different from the UK one). Finally before application on the testing set, all the methods are re-trained from scratch using both the training and validation data.
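A minimal sketch of this grid-search loop for the random forest on a fixed validation split; the feature matrices here are random placeholders and the hyperparameter grid is illustrative:

```python
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholders for the TF-IDF features and scaled targets of the training
# (up to end of 2013) and validation (2014-2015) periods.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((100, 52)), rng.random(100)
X_val, y_val = rng.random((30, 52)), rng.random(30)

grid = {"max_depth": [5, 10, None], "max_features": [0.3, 0.5, 1.0]}
best_rmse, best_params = np.inf, None
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    rf = RandomForestRegressor(n_estimators=200, random_state=0, **params).fit(X_train, y_train)
    rmse = np.sqrt(np.mean((rf.predict(X_val) - y_val) ** 2))
    if rmse < best_rmse:
        best_rmse, best_params = rmse, params

# Re-train from scratch on training + validation data with the best hyperparameters.
final_rf = RandomForestRegressor(n_estimators=200, random_state=0, **best_params)
final_rf.fit(np.vstack([X_train, X_val]), np.concatenate([y_train, y_val]))
```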
<<</Hyperparameter Tuning>>>
<<</Modeling and forecasting framework>>>
<<<Experiments>>>
The goal of our experiments is to quantify how close one can get using textual data only when compared to numerical data. However the inputs of the numerical benchmark should hence be comparable to the information contained in the weather reports. Considering they mainly contain calendar (day of the week and month) as well as temperature and wind information, the benchmark of comparison is a random forest trained on four features only: the time of the year (whose value is 0 on January the 1st and 1 on December the 31st with a linear growth in between), the day of the week, the national average temperature and wind speed. The metrics of evaluation are the Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and the $R^2$ coefficient, given by:

$\text{MAPE} = \frac{100}{T}\sum _{t=1}^{T}\left|\frac{y_t - \hat{y}_t}{y_t}\right|, \quad \text{RMSE} = \sqrt{\frac{1}{T}\sum _{t=1}^{T}(y_t - \hat{y}_t)^2}, \quad \text{MAE} = \frac{1}{T}\sum _{t=1}^{T}|y_t - \hat{y}_t|, \quad R^2 = 1 - \frac{\sum _{t=1}^{T}(y_t - \hat{y}_t)^2}{\sum _{t=1}^{T}(y_t - \overline{y})^2}$
where $T$ is the number of test samples, $y_t$ and $\hat{y}_t$ are respectively the ground truth and the prediction for the document of day $t$, and $\overline{y}$ is the empirical average of the time series over the test sample. A known problem with MAPE is that it unreasonably increases the error score for values close to 0. While for the load it isn't an issue at all, it can be for the meteorological time series. Therefore for the temperature, the MAPE is calculated only when the ground truth is above the 5% empirical quantile. Although we aim at achieving the highest accuracy possible, we focus on the interpretability of our models as well.
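A minimal implementation of the four metrics, including the 5% quantile threshold used for the temperature MAPE; the example arrays are placeholders:

```python
import numpy as np

def evaluation_metrics(y_true, y_pred, mape_quantile=0.0):
    """MAPE, RMSE, MAE and R^2; MAPE is computed only above the given quantile."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    keep = y_true > np.quantile(y_true, mape_quantile) if mape_quantile > 0 else slice(None)
    mape = 100 * np.mean(np.abs((y_true[keep] - y_pred[keep]) / y_true[keep]))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mae = np.mean(np.abs(y_true - y_pred))
    r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MAPE": mape, "RMSE": rmse, "MAE": mae, "R2": r2}

# Example: temperature-like series, MAPE restricted to values above the 5% quantile.
y_true = np.array([2.0, 5.0, 10.0, 15.0, 20.0])
y_pred = np.array([2.5, 4.0, 11.0, 14.0, 21.0])
print(evaluation_metrics(y_true, y_pred, mape_quantile=0.05))
```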
<<<Feature selection>>>
Many words are obviously irrelevant to the time series in our texts. For instance the day of the week, while playing a significant role for the load demand, is useless for temperature or wind. Such words make the training harder and may decrease the accuracy of the prediction. Therefore a feature selection procedure similar to BIBREF28 is applied to select a subset of useful features for the different algorithms, and for each type of time series. Random forests are naturally able to calculate feature importance through the calculation of error increase in the out-of-bag (OOB) samples. Therefore the following process is applied to select a subset of $V^*$ relevant words to keep (a short code sketch is given after the steps):
A RF is trained on the whole training & validation set. The OOB feature importance can thus be calculated.
The features are then successively added to the RF in decreasing order of feature importance.
This process is repeated $B=10$ times to tone down the randomness. The number $V^*$ is then set to the number of features giving the highest median OOB $R^2$ value.
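A compact sketch of this selection loop using the OOB $R^2$ of scikit-learn random forests; the data is a random placeholder and a single run is shown instead of the $B=10$ repetitions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X, y = rng.random((500, 300)), rng.random(500)   # TF-IDF features and scaled target

# Step 1: OOB feature importance from a forest trained on the full vocabulary.
rf = RandomForestRegressor(n_estimators=100, oob_score=True, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]

# Step 2: add words by decreasing importance and record the OOB R^2.
oob_scores = []
for k in range(1, 60):
    rf_k = RandomForestRegressor(n_estimators=100, oob_score=True, random_state=0)
    rf_k.fit(X[:, order[:k]], y)
    oob_scores.append(rf_k.oob_score_)

# Step 3 (one run): V* is the size giving the best OOB R^2; in the paper the
# median over B=10 repetitions is used instead.
V_star = int(np.argmax(oob_scores)) + 1
```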
The results of this procedure for the French data are represented in figure FIGREF24. The best median $R^2$ is achieved for $V^* = 52$, although one could argue that not much gain is obtained after 36 words. The results are very similar for the UK data set, thus for the sake of simplicity the same value $V^* = 52$ is used. Note that the same subset of words is used for all the different forecasting models, which could be improved in further work using other selection criteria (e.g. mutual information, see BIBREF29). An example of normalized feature importance is given in figure FIGREF32.
<<</Feature selection>>>
<<<Main results>>>
Note that most of the considered algorithms involve randomness during the training phase, with the subsampling in the RFs or the gradient descent in the NNs for instance. In order to tone it down and to increase the consistency of our results, the different models are run $B=10$ times. The results presented hereafter correspond to the average and standard-deviation on those runs. The RF model denoted as "sel" is the one with the reduced number of features, whereas the other RF uses the full vocabulary. We also considered an aggregated forecaster (abridged Agg), consisting of the average of the two best individual ones in terms of RMSE. All the neural network methods have a reduced vocabulary size $V^*$. The results for the French and UK data are respectively given by tables TABREF26 and TABREF27.
Our empirical results show that for the electricity consumption prediction task, the order of magnitude of the relative error is around 5%, independently of the language, encoding and machine learning method, thus proving the intrinsic value of the information contained in the textual documents for this time series. As expected, all text-based methods perform poorer than when using explicitly numerical input features. Indeed, despite containing relevant information, the text is always more fuzzy and less precise than an explicit value for the temperature or the time of the year for instance. Again the aim of this work is not to beat traditional methods with text, but to quantify how close one can come to traditional approaches when using text exclusively. As such, achieving less than 5% of MAPE was nonetheless deemed impressive by expert electricity forecasters. Feature selection brings significant improvement in the French case, although it does not yield any improvement in the English one. The reason for this is currently unknown. Nevertheless the feature selection procedure also helps the NNs by dramatically reducing the vocabulary size, and without it the training of the networks was bound to fail. While the errors across methods are roughly comparable and highlight the valuable information contained within the reports, the best method nonetheless fluctuates between languages. Indeed in the French case there is a hegemony of the NNs, with the embedding RNN edging the MLP TF-IDF one. However for the UK data set the RFs yield significantly better results on the test set than the NNs. This inversion of performance of the algorithms is possibly due to a change in the way the reports were written by the Met Office after August 2017, since the results of the MLP and RNN on the validation set (not shown here) were satisfactory and better than both RFs. For the two languages both the CNN and the LASSO yielded poor results. For the former, it is because despite grid search no satisfactory architecture was found, whereas the latter is a linear approach and was used more for interpretation purposes than strong performance. Finally the naive aggregation of the two best experts always yields improvement, especially for the French case where the two different encodings are combined. This emphasises the specificity of the two representations, leading to different types of errors. An example of comparison between ground truth and forecast for the case of electricity consumption is given for the French language in fig. FIGREF29, while another for temperature may be found in the appendix figure FIGREF51. The sudden "spikes" in the forecast are due to the presence of winter-related words in a summer report. This happens when such words are used in comparisons, such as "The flood will be as severe as in January" in a June report, and is a limit of our approach. Finally, the usual residual $\hat{\varepsilon }_t = y_t - \hat{y}_t$ analysis procedures were applied: Kolmogorov normality test, QQ-plot comparison to Gaussian quantiles, residual/fit comparison, etc. While not thoroughly Gaussian, the residuals were close to normality nonetheless and displayed satisfactory properties, such as being generally independent from the fitted and ground truth values. Excerpts of this analysis for France are given in figure FIGREF52 of the appendix. The results for the temperature and wind series are given in the appendix.
Considering that they have a more stochastic behavior and are thus more difficult to predict, the order of magnitude of the errors differs (the MAPE being around 15% for temperature for instance), but globally the same observations can be made.
<<</Main results>>>
<<<Interpretability of the models>>>
While accuracy is the most relevant metric to assess forecasts, interpretability of the models is of paramount importance, especially in the field of professional electricity load forecasting and considering the novelty of our work. Therefore in this section we discuss the properties of the RF and LASSO models using the TF-IDF encoding scheme, as well as the RNN word embedding.
<<<TF-IDF representation>>>
One significant advantage of the TF-IDF encoding when combined with random forests or the LASSO is that it is possible to interpret the behavior of the models. For instance, figure FIGREF32 represents the 20 most important features (in the RF OOB sense) for both data sets when regressing over electricity demand data. As one can see, the random forest naturally extracts calendar information contained in the weather reports, since months or week-end days are among the most important ones. For the former, this is due to the periodic behavior of electricity consumption, which is higher in winter and lower in summer. This is also why characteristic phenomena of summer and winter, such as "thunderstorms", "snow" or "freezing" also have a high feature importance. The fact that August has a much more important role than July also concurs with expert knowledge, especially for France: indeed it is the month when most people go on vacations, and thus when the load drops the most. As for the week-end names, it is due to the significantly different consumer behavior during Saturdays and especially Sundays when most of the businesses are closed and people are usually at home. Therefore the relevant words selected by the random forest are almost all in agreement with expert knowledge.
We also performed the analysis of the relevant words for the LASSO. In order to do that, we examined the words $w$ with the largest associated coefficients $\beta _w$ (in absolute value) in the regression. Since the TF-IDF matrix has positive coefficients, it is possible to interpret the sign of the coefficient $\beta _w$ as its impact on the time series. For instance if $\beta _w > 0$ then the presence of the word $w$ causes a rise in the time series (respectively if $\beta _w < 0$, it entails a decline). The results are plotted in fig. FIGREF35 for the UK. As one can see, the winter-related words have positive coefficients, and thus increase the load demand as expected, whereas the summer-related ones decrease it. The value of the coefficients also reflects the impact on the load demand. For example January and February have the highest and very similar values, which concurs with the similarity between the months. Sunday has a much more negative coefficient than Saturday, since the demand significantly drops during the last day of the week. The important words also globally match between the LASSO and the RF, which is a proof of the consistency of our results (this is further explored afterwards in figure FIGREF43). Although not presented here, the results are almost identical for the French load, with approximately the same order of relevancy. The important words logically vary as a function of the considered time series, but are always coherent. For instance for the wind one, terms such as "gales", "windy" or "strong" have the highest positive coefficients, as seen in the appendix figure FIGREF53. Those results show that a text-based approach not only extracts the relevant information by itself, but may eventually be used to understand which phenomena are relevant to explain the behavior of a time series, and to which extent.
<<</TF-IDF representation>>>
<<<Vector embedding representation>>>
Word vector embeddings such as Word2Vec and GloVe are known for their vectorial properties translating linguistic ones. However, considering the objective function of our problem, there was no obvious reason for such attributes to appear in our own. Nevertheless, for both languages we conducted an analysis of the geometric properties of our embedding matrix. We investigated the distances between word vectors, the relevant metric being the cosine distance given by:

$d_{\cos }(\overrightarrow{w_1}, \overrightarrow{w_2}) = 1 - \frac{\overrightarrow{w_1} \cdot \overrightarrow{w_2}}{\Vert \overrightarrow{w_1}\Vert _2 \, \Vert \overrightarrow{w_2}\Vert _2}$
where $\overrightarrow{w_1}$ and $\overrightarrow{w_2}$ are given word vectors. Thus a cosine distance lower than 1 means similarity between word vectors, whereas a greater than 1 corresponds to opposition.
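A minimal helper implementing this distance on rows of the learned embedding matrix; the matrix and word index used here are random placeholders for the ones learned by the RNN:

```python
import numpy as np

def cosine_distance(w1, w2):
    """1 - cosine similarity: below 1 means similarity, above 1 means opposition."""
    return 1 - np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))

rng = np.random.default_rng(0)
embedding = rng.standard_normal((300, 20))        # V x q embedding matrix of the RNN
word_index = {"january": 0, "february": 1, "july": 2}

print(cosine_distance(embedding[word_index["january"]],
                      embedding[word_index["july"]]))

# The 10 closest words to a reference word, as reported in tables TABREF38-TABREF39
# (the reference itself comes first with distance 0).
ref = embedding[word_index["january"]]
closest = np.argsort([cosine_distance(ref, w) for w in embedding])[:10]
```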
The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However, considering the vocabulary was reduced to $V^* = 52$ words, those results lacked consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The last two rows correspond to words we deemed important to check the distance with (an antagonistic one or a relevant one not in the top 9 for instance).
The results of the experiments are very similar for both languages again. Indeed, the words are globally embedded in the vector space by topic: winter-related words such as "January" ("janvier"), "February" ("février"), "snow" ("neige"), "freezing" ("glacial") are close to each other and almost opposite to summer-related ones such as "July" ("juillet"), "August" ("août"), "hot" ("chaud"). For both cases the week days Monday ("lundi") to Friday ("vendredi") are grouped very closely to each other, while significantly separated from the week-end ones "Saturday" ("samedi") and "Sunday" ("dimanche"). Despite these observations, a few seemingly unrelated words enter the lists of top 10, especially for the English case (such as "pressure" or "dusk" for "February"). In fact the French language embedding seems of better quality, which is perhaps linked to the longer length of the French reports on average. This issue could probably be addressed with more data. Another observation made is that the importance of a word $w$ seems related to its Euclidean norm in the embedding space, $\Vert \overrightarrow{w}\Vert _2$. For both languages the list of the 20 words with the largest norm is given in fig. FIGREF40. As one can see, it globally matches the selected ones from the RF or the LASSO (especially for the French language), although the order is quite different. This is further supported by the Venn diagram of common words among the top 50 ones for each word selection method represented in figure FIGREF43 for France. Therefore this observation could also be used as a feature selection procedure for the RNN or CNN in further work.
In order to achieve a global view of the embeddings, the t-SNE algorithm BIBREF30 is applied to project an embedding matrix into a 2-dimensional space, for both languages. The observations for the few aforementioned words are confirmed by this representation, as plotted in figure FIGREF44. Thematic clusters can be observed, roughly corresponding to winter, summer, week days and week-end days for both languages. Globally summer and winter seem opposed, although one should keep in mind that the t-SNE representation does not preserve the cosine distance. The clusters of the French embedding appear much more compact than those of the UK one, supporting the observations made when explicitly calculating the cosine distances.
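A minimal sketch of this projection; the embedding matrix is again a random placeholder:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embedding = rng.standard_normal((300, 20))   # V x q word embedding matrix

# 2-D projection of the word vectors for visualization; note that t-SNE does not
# preserve the cosine distances discussed above.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embedding)
print(coords.shape)   # (300, 2)
```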
<<</Vector embedding representation>>>
<<</Interpretability of the models>>>
<<</Experiments>>>
<<<Conclusion>>>
In this study, a novel pipeline to predict three types of time series using exclusively a textual source was proposed. Making use of publicly available daily weather reports, we were able to predict the electricity consumption with less than 5% of MAPE for both France and the United-Kingdom. Moreover our average national temperature and wind speed predictions displayed sufficient accuracy to be used to replace missing data or as first approximation in traditional models in case of unavailability of meteorological features.
The texts were encoded numerically using either TF-IDF or our own neural word embedding. A plethora of machine learning algorithms, such as random forests or neural networks, were applied on top of those representations. Our results were consistent over language, numerical representation of the text and prediction algorithm, proving the intrinsic value of the textual sources for the three considered time series. Contrary to previous works in the field of textual data for time series forecasting, we went in depth and quantified the impact of words on the variations of the series. As such we saw that all the algorithms naturally extract calendar and meteorological information from the texts, and that words impact the time series in the expected way (e.g. winter words increase the consumption and summer ones decrease it). Despite being trained on a regular quadratic loss, our neural word embedding spontaneously builds geometric properties. Not only does the norm of a word vector reflect its significance, but the words are also grouped by topic, with for example winter, summer or day-of-the-week clusters.
Note that this study was a preliminary work on the use of textual information for time series prediction, especially electricity demand one. The long-term goal is to include multiple sources of textual information to improve the accuracy of state-of-the-art methods or to build a text based forecaster which can be used to increase the diversity in a set of experts for electricity consumption BIBREF31. However due to the redundancy of the information of the considered weather reports with meteorological features, it may be necessary to consider alternative textual sources. The use of social media such as Facebook, Twitter or Instagram may give interesting insight and will therefore be investigated in future work.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nPresentation of the data\nTime Series\nText\nModeling and forecasting framework\nNumerical Encoding of the Text\nMachine Learning Algorithms\nHyperparameter Tuning\nExperiments\nFeature selection\nMain results\nInterpretability of the models\nTF-IDF representation\nVector embedding representation\nConclusion"
],
"type": "outline"
}
|